00:00:00.000 Started by upstream project "autotest-per-patch" build number 132381 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.130 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.131 The recommended git tool is: git 00:00:00.131 using credential 00000000-0000-0000-0000-000000000002 00:00:00.133 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.176 Fetching changes from the remote Git repository 00:00:00.181 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.221 Using shallow fetch with depth 1 00:00:00.221 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.221 > git --version # timeout=10 00:00:00.246 > git --version # 'git version 2.39.2' 00:00:00.246 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.270 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.270 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.216 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.227 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.237 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.237 > git config core.sparsecheckout # timeout=10 00:00:06.247 > git read-tree -mu HEAD # timeout=10 00:00:06.262 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.283 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.284 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.389 [Pipeline] Start of Pipeline 00:00:06.401 [Pipeline] library 00:00:06.402 Loading library shm_lib@master 00:00:06.402 Library shm_lib@master is cached. Copying from home. 00:00:06.420 [Pipeline] node 00:00:06.430 Running on WFP34 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:06.432 [Pipeline] { 00:00:06.440 [Pipeline] catchError 00:00:06.441 [Pipeline] { 00:00:06.450 [Pipeline] wrap 00:00:06.456 [Pipeline] { 00:00:06.461 [Pipeline] stage 00:00:06.462 [Pipeline] { (Prologue) 00:00:06.652 [Pipeline] sh 00:00:06.934 + logger -p user.info -t JENKINS-CI 00:00:06.954 [Pipeline] echo 00:00:06.956 Node: WFP34 00:00:06.963 [Pipeline] sh 00:00:07.272 [Pipeline] setCustomBuildProperty 00:00:07.289 [Pipeline] echo 00:00:07.290 Cleanup processes 00:00:07.296 [Pipeline] sh 00:00:07.580 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.580 1408870 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.593 [Pipeline] sh 00:00:07.878 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.878 ++ grep -v 'sudo pgrep' 00:00:07.878 ++ awk '{print $1}' 00:00:07.878 + sudo kill -9 00:00:07.878 + true 00:00:07.893 [Pipeline] cleanWs 00:00:07.904 [WS-CLEANUP] Deleting project workspace... 00:00:07.904 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.910 [WS-CLEANUP] done 00:00:07.915 [Pipeline] setCustomBuildProperty 00:00:07.931 [Pipeline] sh 00:00:08.215 + sudo git config --global --replace-all safe.directory '*' 00:00:08.305 [Pipeline] httpRequest 00:00:08.692 [Pipeline] echo 00:00:08.694 Sorcerer 10.211.164.20 is alive 00:00:08.703 [Pipeline] retry 00:00:08.706 [Pipeline] { 00:00:08.720 [Pipeline] httpRequest 00:00:08.725 HttpMethod: GET 00:00:08.725 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.726 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.751 Response Code: HTTP/1.1 200 OK 00:00:08.751 Success: Status code 200 is in the accepted range: 200,404 00:00:08.751 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:23.953 [Pipeline] } 00:00:23.972 [Pipeline] // retry 00:00:23.980 [Pipeline] sh 00:00:24.265 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:24.283 [Pipeline] httpRequest 00:00:24.764 [Pipeline] echo 00:00:24.766 Sorcerer 10.211.164.20 is alive 00:00:24.778 [Pipeline] retry 00:00:24.780 [Pipeline] { 00:00:24.794 [Pipeline] httpRequest 00:00:24.799 HttpMethod: GET 00:00:24.799 URL: http://10.211.164.20/packages/spdk_097badaebc5925d7299eba66d2899808afbab0b1.tar.gz 00:00:24.800 Sending request to url: http://10.211.164.20/packages/spdk_097badaebc5925d7299eba66d2899808afbab0b1.tar.gz 00:00:24.825 Response Code: HTTP/1.1 200 OK 00:00:24.825 Success: Status code 200 is in the accepted range: 200,404 00:00:24.826 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_097badaebc5925d7299eba66d2899808afbab0b1.tar.gz 00:01:13.716 [Pipeline] } 00:01:13.734 [Pipeline] // retry 00:01:13.742 [Pipeline] sh 00:01:14.025 + tar --no-same-owner -xf spdk_097badaebc5925d7299eba66d2899808afbab0b1.tar.gz 00:01:16.568 [Pipeline] sh 00:01:16.851 + git -C spdk log --oneline -n5 
00:01:16.852 097badaeb test/nvmf: Solve ambiguity around $NVMF_SECOND_TARGET_IP 00:01:16.852 2741dd1ac test/nvmf: Don't pin nvmf_bdevperf and nvmf_target_disconnect to phy 00:01:16.852 4f0cbdcd1 test/nvmf: Remove all transport conditions from the test suites 00:01:16.852 097b7c969 test/nvmf: Drop $RDMA_IP_LIST 00:01:16.852 400f484f7 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP 00:01:16.863 [Pipeline] } 00:01:16.877 [Pipeline] // stage 00:01:16.886 [Pipeline] stage 00:01:16.888 [Pipeline] { (Prepare) 00:01:16.907 [Pipeline] writeFile 00:01:16.923 [Pipeline] sh 00:01:17.208 + logger -p user.info -t JENKINS-CI 00:01:17.223 [Pipeline] sh 00:01:17.511 + logger -p user.info -t JENKINS-CI 00:01:17.523 [Pipeline] sh 00:01:17.808 + cat autorun-spdk.conf 00:01:17.808 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.808 SPDK_TEST_NVMF=1 00:01:17.808 SPDK_TEST_NVME_CLI=1 00:01:17.808 SPDK_TEST_NVMF_NICS=mlx5 00:01:17.808 SPDK_RUN_UBSAN=1 00:01:17.808 NET_TYPE=phy 00:01:17.816 RUN_NIGHTLY=0 00:01:17.821 [Pipeline] readFile 00:01:17.848 [Pipeline] withEnv 00:01:17.850 [Pipeline] { 00:01:17.862 [Pipeline] sh 00:01:18.149 + set -ex 00:01:18.149 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:18.149 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:18.149 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.149 ++ SPDK_TEST_NVMF=1 00:01:18.149 ++ SPDK_TEST_NVME_CLI=1 00:01:18.149 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:18.149 ++ SPDK_RUN_UBSAN=1 00:01:18.149 ++ NET_TYPE=phy 00:01:18.149 ++ RUN_NIGHTLY=0 00:01:18.149 + case $SPDK_TEST_NVMF_NICS in 00:01:18.149 + DRIVERS=mlx5_ib 00:01:18.149 + [[ -n mlx5_ib ]] 00:01:18.149 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:18.149 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:24.717 rmmod: ERROR: Module irdma is not currently loaded 00:01:24.717 rmmod: ERROR: Module i40iw is not currently loaded 00:01:24.717 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 
00:01:24.717 + true 00:01:24.717 + for D in $DRIVERS 00:01:24.717 + sudo modprobe mlx5_ib 00:01:24.717 + exit 0 00:01:24.727 [Pipeline] } 00:01:24.743 [Pipeline] // withEnv 00:01:24.748 [Pipeline] } 00:01:24.763 [Pipeline] // stage 00:01:24.773 [Pipeline] catchError 00:01:24.775 [Pipeline] { 00:01:24.789 [Pipeline] timeout 00:01:24.790 Timeout set to expire in 1 hr 0 min 00:01:24.791 [Pipeline] { 00:01:24.805 [Pipeline] stage 00:01:24.808 [Pipeline] { (Tests) 00:01:24.823 [Pipeline] sh 00:01:25.110 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:25.110 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:25.110 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:25.110 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:25.110 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:25.110 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:25.110 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:25.110 + [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:25.110 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:25.110 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:25.110 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:25.110 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:25.110 + source /etc/os-release 00:01:25.110 ++ NAME='Fedora Linux' 00:01:25.110 ++ VERSION='39 (Cloud Edition)' 00:01:25.110 ++ ID=fedora 00:01:25.110 ++ VERSION_ID=39 00:01:25.110 ++ VERSION_CODENAME= 00:01:25.110 ++ PLATFORM_ID=platform:f39 00:01:25.110 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:25.110 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:25.110 ++ LOGO=fedora-logo-icon 00:01:25.110 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:25.110 ++ HOME_URL=https://fedoraproject.org/ 00:01:25.110 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:25.110 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:25.110 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:25.110 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:25.110 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:25.110 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:25.110 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:25.110 ++ SUPPORT_END=2024-11-12 00:01:25.110 ++ VARIANT='Cloud Edition' 00:01:25.110 ++ VARIANT_ID=cloud 00:01:25.110 + uname -a 00:01:25.110 Linux spdk-wfp-34 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:25.110 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:27.645 Hugepages 00:01:27.645 node hugesize free / total 00:01:27.645 node0 1048576kB 0 / 0 00:01:27.645 node0 2048kB 0 / 0 00:01:27.645 node1 1048576kB 0 / 0 00:01:27.645 node1 2048kB 0 / 0 00:01:27.645 00:01:27.645 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:27.645 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:27.645 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:27.645 I/OAT 
0000:00:04.2 8086 2021 0 ioatdma - - 00:01:27.645 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:27.645 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:27.645 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:27.645 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:27.645 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:27.645 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:27.645 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:27.645 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:27.645 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:27.645 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:27.645 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:27.645 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:27.645 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:27.645 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:27.645 + rm -f /tmp/spdk-ld-path 00:01:27.645 + source autorun-spdk.conf 00:01:27.645 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.645 ++ SPDK_TEST_NVMF=1 00:01:27.645 ++ SPDK_TEST_NVME_CLI=1 00:01:27.645 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:27.645 ++ SPDK_RUN_UBSAN=1 00:01:27.645 ++ NET_TYPE=phy 00:01:27.645 ++ RUN_NIGHTLY=0 00:01:27.645 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:27.645 + [[ -n '' ]] 00:01:27.645 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:27.645 + for M in /var/spdk/build-*-manifest.txt 00:01:27.645 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:27.645 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:27.645 + for M in /var/spdk/build-*-manifest.txt 00:01:27.645 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:27.645 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:27.645 + for M in /var/spdk/build-*-manifest.txt 00:01:27.645 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:27.645 + cp /var/spdk/build-repo-manifest.txt 
/var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:27.645 ++ uname 00:01:27.645 + [[ Linux == \L\i\n\u\x ]] 00:01:27.645 + sudo dmesg -T 00:01:27.645 + sudo dmesg --clear 00:01:27.905 + dmesg_pid=1410264 00:01:27.905 + [[ Fedora Linux == FreeBSD ]] 00:01:27.905 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:27.905 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:27.905 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:27.905 + [[ -x /usr/src/fio-static/fio ]] 00:01:27.905 + export FIO_BIN=/usr/src/fio-static/fio 00:01:27.905 + FIO_BIN=/usr/src/fio-static/fio 00:01:27.905 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:27.905 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:27.905 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:27.905 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:27.905 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:27.905 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:27.905 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:27.905 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:27.905 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:27.905 + sudo dmesg -Tw 00:01:27.905 11:24:31 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:27.905 11:24:31 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:27.905 11:24:31 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.905 11:24:31 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:27.905 11:24:31 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:27.905 11:24:31 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5 00:01:27.905 11:24:31 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:01:27.905 11:24:31 -- 
nvmf-phy-autotest/autorun-spdk.conf@6 -- $ NET_TYPE=phy 00:01:27.905 11:24:31 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ RUN_NIGHTLY=0 00:01:27.905 11:24:31 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:27.905 11:24:31 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:27.905 11:24:31 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:27.905 11:24:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:27.905 11:24:31 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:27.905 11:24:31 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:27.905 11:24:31 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:27.905 11:24:31 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:27.905 11:24:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.905 11:24:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.905 11:24:31 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.905 11:24:31 -- paths/export.sh@5 -- $ export PATH 00:01:27.905 11:24:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.905 11:24:31 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:27.906 11:24:31 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:27.906 11:24:31 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732098271.XXXXXX 00:01:27.906 11:24:31 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732098271.fk8ZWQ 00:01:27.906 11:24:31 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:27.906 11:24:31 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:27.906 11:24:31 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:27.906 11:24:31 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:27.906 11:24:31 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:27.906 11:24:31 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:27.906 11:24:31 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:27.906 11:24:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.906 11:24:31 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:27.906 11:24:31 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:27.906 11:24:31 -- pm/common@17 -- $ local monitor 00:01:27.906 11:24:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.906 11:24:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.906 11:24:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.906 11:24:31 -- pm/common@21 -- $ date +%s 00:01:27.906 11:24:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.906 11:24:31 -- pm/common@21 -- $ date +%s 00:01:27.906 11:24:31 -- pm/common@25 -- $ sleep 1 00:01:27.906 11:24:31 -- pm/common@21 -- $ date +%s 00:01:27.906 11:24:31 -- pm/common@21 -- $ date +%s 00:01:27.906 11:24:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732098271 00:01:27.906 11:24:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732098271 00:01:27.906 11:24:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732098271 00:01:27.906 11:24:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732098271 00:01:27.906 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732098271_collect-cpu-load.pm.log 00:01:27.906 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732098271_collect-vmstat.pm.log 00:01:28.164 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732098271_collect-cpu-temp.pm.log 00:01:28.164 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732098271_collect-bmc-pm.bmc.pm.log 00:01:29.100 11:24:32 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:29.100 11:24:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:29.100 11:24:32 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:29.100 11:24:32 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:29.100 11:24:32 -- spdk/autobuild.sh@16 -- $ date -u 00:01:29.100 Wed Nov 20 10:24:32 AM UTC 2024 00:01:29.100 11:24:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:29.100 v25.01-pre-206-g097badaeb 00:01:29.100 11:24:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:29.100 11:24:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:29.100 11:24:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:29.100 11:24:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:29.100 11:24:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:29.100 11:24:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.100 ************************************ 00:01:29.100 START TEST ubsan 00:01:29.100 
************************************ 00:01:29.100 11:24:32 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:29.100 using ubsan 00:01:29.100 00:01:29.100 real 0m0.001s 00:01:29.100 user 0m0.001s 00:01:29.100 sys 0m0.000s 00:01:29.100 11:24:32 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:29.100 11:24:32 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:29.100 ************************************ 00:01:29.100 END TEST ubsan 00:01:29.100 ************************************ 00:01:29.100 11:24:32 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:29.100 11:24:32 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:29.100 11:24:32 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:29.100 11:24:32 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:29.100 11:24:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:29.100 11:24:32 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:29.100 11:24:32 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:29.100 11:24:32 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:29.100 11:24:32 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:29.100 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:29.100 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:29.669 Using 'verbs' RDMA provider 00:01:45.220 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:57.433 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:57.433 Creating mk/config.mk...done. 00:01:57.433 Creating mk/cc.flags.mk...done. 00:01:57.433 Type 'make' to build. 
00:01:57.433 11:25:00 -- spdk/autobuild.sh@70 -- $ run_test make make -j72 00:01:57.433 11:25:00 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:57.433 11:25:00 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:57.433 11:25:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.433 ************************************ 00:01:57.433 START TEST make 00:01:57.433 ************************************ 00:01:57.433 11:25:00 make -- common/autotest_common.sh@1129 -- $ make -j72 00:01:57.433 make[1]: Nothing to be done for 'all'. 00:02:05.563 The Meson build system 00:02:05.563 Version: 1.5.0 00:02:05.563 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:02:05.563 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:02:05.563 Build type: native build 00:02:05.563 Program cat found: YES (/usr/bin/cat) 00:02:05.563 Project name: DPDK 00:02:05.563 Project version: 24.03.0 00:02:05.563 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:05.563 C linker for the host machine: cc ld.bfd 2.40-14 00:02:05.564 Host machine cpu family: x86_64 00:02:05.564 Host machine cpu: x86_64 00:02:05.564 Message: ## Building in Developer Mode ## 00:02:05.564 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:05.564 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:05.564 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:05.564 Program python3 found: YES (/usr/bin/python3) 00:02:05.564 Program cat found: YES (/usr/bin/cat) 00:02:05.564 Compiler for C supports arguments -march=native: YES 00:02:05.564 Checking for size of "void *" : 8 00:02:05.564 Checking for size of "void *" : 8 (cached) 00:02:05.564 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:05.564 Library m found: YES 00:02:05.564 
Library numa found: YES 00:02:05.564 Has header "numaif.h" : YES 00:02:05.564 Library fdt found: NO 00:02:05.564 Library execinfo found: NO 00:02:05.564 Has header "execinfo.h" : YES 00:02:05.564 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:05.564 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:05.564 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:05.564 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:05.564 Run-time dependency openssl found: YES 3.1.1 00:02:05.564 Run-time dependency libpcap found: YES 1.10.4 00:02:05.564 Has header "pcap.h" with dependency libpcap: YES 00:02:05.564 Compiler for C supports arguments -Wcast-qual: YES 00:02:05.564 Compiler for C supports arguments -Wdeprecated: YES 00:02:05.564 Compiler for C supports arguments -Wformat: YES 00:02:05.564 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:05.564 Compiler for C supports arguments -Wformat-security: NO 00:02:05.564 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:05.564 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:05.564 Compiler for C supports arguments -Wnested-externs: YES 00:02:05.564 Compiler for C supports arguments -Wold-style-definition: YES 00:02:05.564 Compiler for C supports arguments -Wpointer-arith: YES 00:02:05.564 Compiler for C supports arguments -Wsign-compare: YES 00:02:05.564 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:05.564 Compiler for C supports arguments -Wundef: YES 00:02:05.564 Compiler for C supports arguments -Wwrite-strings: YES 00:02:05.564 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:05.564 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:05.564 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:05.564 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:05.564 Program objdump found: YES (/usr/bin/objdump) 00:02:05.564 Compiler for C 
supports arguments -mavx512f: YES
00:02:05.564 Checking if "AVX512 checking" compiles: YES
00:02:05.564 Fetching value of define "__SSE4_2__" : 1
00:02:05.564 Fetching value of define "__AES__" : 1
00:02:05.564 Fetching value of define "__AVX__" : 1
00:02:05.564 Fetching value of define "__AVX2__" : 1
00:02:05.564 Fetching value of define "__AVX512BW__" : 1
00:02:05.564 Fetching value of define "__AVX512CD__" : 1
00:02:05.564 Fetching value of define "__AVX512DQ__" : 1
00:02:05.564 Fetching value of define "__AVX512F__" : 1
00:02:05.564 Fetching value of define "__AVX512VL__" : 1
00:02:05.564 Fetching value of define "__PCLMUL__" : 1
00:02:05.564 Fetching value of define "__RDRND__" : 1
00:02:05.564 Fetching value of define "__RDSEED__" : 1
00:02:05.564 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:05.564 Fetching value of define "__znver1__" : (undefined)
00:02:05.564 Fetching value of define "__znver2__" : (undefined)
00:02:05.564 Fetching value of define "__znver3__" : (undefined)
00:02:05.564 Fetching value of define "__znver4__" : (undefined)
00:02:05.564 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:05.564 Message: lib/log: Defining dependency "log"
00:02:05.564 Message: lib/kvargs: Defining dependency "kvargs"
00:02:05.564 Message: lib/telemetry: Defining dependency "telemetry"
00:02:05.564 Checking for function "getentropy" : NO
00:02:05.564 Message: lib/eal: Defining dependency "eal"
00:02:05.564 Message: lib/ring: Defining dependency "ring"
00:02:05.564 Message: lib/rcu: Defining dependency "rcu"
00:02:05.564 Message: lib/mempool: Defining dependency "mempool"
00:02:05.564 Message: lib/mbuf: Defining dependency "mbuf"
00:02:05.564 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:05.564 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:05.564 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:05.564 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:05.564 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:05.564 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:05.564 Compiler for C supports arguments -mpclmul: YES
00:02:05.564 Compiler for C supports arguments -maes: YES
00:02:05.564 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:05.564 Compiler for C supports arguments -mavx512bw: YES
00:02:05.564 Compiler for C supports arguments -mavx512dq: YES
00:02:05.564 Compiler for C supports arguments -mavx512vl: YES
00:02:05.564 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:05.564 Compiler for C supports arguments -mavx2: YES
00:02:05.564 Compiler for C supports arguments -mavx: YES
00:02:05.564 Message: lib/net: Defining dependency "net"
00:02:05.564 Message: lib/meter: Defining dependency "meter"
00:02:05.564 Message: lib/ethdev: Defining dependency "ethdev"
00:02:05.564 Message: lib/pci: Defining dependency "pci"
00:02:05.564 Message: lib/cmdline: Defining dependency "cmdline"
00:02:05.564 Message: lib/hash: Defining dependency "hash"
00:02:05.564 Message: lib/timer: Defining dependency "timer"
00:02:05.564 Message: lib/compressdev: Defining dependency "compressdev"
00:02:05.564 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:05.564 Message: lib/dmadev: Defining dependency "dmadev"
00:02:05.564 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:05.564 Message: lib/power: Defining dependency "power"
00:02:05.564 Message: lib/reorder: Defining dependency "reorder"
00:02:05.564 Message: lib/security: Defining dependency "security"
00:02:05.564 Has header "linux/userfaultfd.h" : YES
00:02:05.564 Has header "linux/vduse.h" : YES
00:02:05.564 Message: lib/vhost: Defining dependency "vhost"
00:02:05.564 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:05.564 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:05.564 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:05.564 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:05.564 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:05.564 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:05.564 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:05.564 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:05.564 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:05.564 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:05.564 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:05.564 Configuring doxy-api-html.conf using configuration
00:02:05.564 Configuring doxy-api-man.conf using configuration
00:02:05.564 Program mandb found: YES (/usr/bin/mandb)
00:02:05.564 Program sphinx-build found: NO
00:02:05.564 Configuring rte_build_config.h using configuration
00:02:05.564 Message:
00:02:05.564 =================
00:02:05.564 Applications Enabled
00:02:05.564 =================
00:02:05.564
00:02:05.564 apps:
00:02:05.564
00:02:05.564
00:02:05.564 Message:
00:02:05.564 =================
00:02:05.564 Libraries Enabled
00:02:05.564 =================
00:02:05.564
00:02:05.564 libs:
00:02:05.564 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:05.564 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:05.564 cryptodev, dmadev, power, reorder, security, vhost,
00:02:05.564
00:02:05.564 Message:
00:02:05.564 ===============
00:02:05.564 Drivers Enabled
00:02:05.564 ===============
00:02:05.564
00:02:05.564 common:
00:02:05.564
00:02:05.564 bus:
00:02:05.564 pci, vdev,
00:02:05.564 mempool:
00:02:05.564 ring,
00:02:05.564 dma:
00:02:05.564
00:02:05.564 net:
00:02:05.564
00:02:05.564 crypto:
00:02:05.564
00:02:05.564 compress:
00:02:05.564
00:02:05.564 vdpa:
00:02:05.564
00:02:05.564
00:02:05.564 Message:
00:02:05.564 =================
00:02:05.564 Content Skipped
00:02:05.564 =================
00:02:05.564
00:02:05.564 apps:
00:02:05.564 dumpcap: explicitly disabled via build config
00:02:05.564 graph: explicitly disabled via build config
00:02:05.564 pdump: explicitly disabled via build config
00:02:05.564 proc-info: explicitly disabled via build config
00:02:05.564 test-acl: explicitly disabled via build config
00:02:05.564 test-bbdev: explicitly disabled via build config
00:02:05.564 test-cmdline: explicitly disabled via build config
00:02:05.564 test-compress-perf: explicitly disabled via build config
00:02:05.564 test-crypto-perf: explicitly disabled via build config
00:02:05.564 test-dma-perf: explicitly disabled via build config
00:02:05.564 test-eventdev: explicitly disabled via build config
00:02:05.564 test-fib: explicitly disabled via build config
00:02:05.564 test-flow-perf: explicitly disabled via build config
00:02:05.564 test-gpudev: explicitly disabled via build config
00:02:05.564 test-mldev: explicitly disabled via build config
00:02:05.564 test-pipeline: explicitly disabled via build config
00:02:05.565 test-pmd: explicitly disabled via build config
00:02:05.565 test-regex: explicitly disabled via build config
00:02:05.565 test-sad: explicitly disabled via build config
00:02:05.565 test-security-perf: explicitly disabled via build config
00:02:05.565
00:02:05.565 libs:
00:02:05.565 argparse: explicitly disabled via build config
00:02:05.565 metrics: explicitly disabled via build config
00:02:05.565 acl: explicitly disabled via build config
00:02:05.565 bbdev: explicitly disabled via build config
00:02:05.565 bitratestats: explicitly disabled via build config
00:02:05.565 bpf: explicitly disabled via build config
00:02:05.565 cfgfile: explicitly disabled via build config
00:02:05.565 distributor: explicitly disabled via build config
00:02:05.565 efd: explicitly disabled via build config
00:02:05.565 eventdev: explicitly disabled via build config
00:02:05.565 dispatcher: explicitly disabled via build config
00:02:05.565 gpudev: explicitly disabled via build config
00:02:05.565 gro: explicitly disabled via build config
00:02:05.565 gso: explicitly disabled via build config
00:02:05.565 ip_frag: explicitly disabled via build config
00:02:05.565 jobstats: explicitly disabled via build config
00:02:05.565 latencystats: explicitly disabled via build config
00:02:05.565 lpm: explicitly disabled via build config
00:02:05.565 member: explicitly disabled via build config
00:02:05.565 pcapng: explicitly disabled via build config
00:02:05.565 rawdev: explicitly disabled via build config
00:02:05.565 regexdev: explicitly disabled via build config
00:02:05.565 mldev: explicitly disabled via build config
00:02:05.565 rib: explicitly disabled via build config
00:02:05.565 sched: explicitly disabled via build config
00:02:05.565 stack: explicitly disabled via build config
00:02:05.565 ipsec: explicitly disabled via build config
00:02:05.565 pdcp: explicitly disabled via build config
00:02:05.565 fib: explicitly disabled via build config
00:02:05.565 port: explicitly disabled via build config
00:02:05.565 pdump: explicitly disabled via build config
00:02:05.565 table: explicitly disabled via build config
00:02:05.565 pipeline: explicitly disabled via build config
00:02:05.565 graph: explicitly disabled via build config
00:02:05.565 node: explicitly disabled via build config
00:02:05.565
00:02:05.565 drivers:
00:02:05.565 common/cpt: not in enabled drivers build config
00:02:05.565 common/dpaax: not in enabled drivers build config
00:02:05.565 common/iavf: not in enabled drivers build config
00:02:05.565 common/idpf: not in enabled drivers build config
00:02:05.565 common/ionic: not in enabled drivers build config
00:02:05.565 common/mvep: not in enabled drivers build config
00:02:05.565 common/octeontx: not in enabled drivers build config
00:02:05.565 bus/auxiliary: not in enabled drivers build config
00:02:05.565 bus/cdx: not in enabled drivers build config
00:02:05.565 bus/dpaa: not in enabled drivers build config
00:02:05.565 bus/fslmc: not in enabled drivers build config
00:02:05.565 bus/ifpga: not in enabled drivers build config
00:02:05.565 bus/platform: not in enabled drivers build config
00:02:05.565 bus/uacce: not in enabled drivers build config
00:02:05.565 bus/vmbus: not in enabled drivers build config
00:02:05.565 common/cnxk: not in enabled drivers build config
00:02:05.565 common/mlx5: not in enabled drivers build config
00:02:05.565 common/nfp: not in enabled drivers build config
00:02:05.565 common/nitrox: not in enabled drivers build config
00:02:05.565 common/qat: not in enabled drivers build config
00:02:05.565 common/sfc_efx: not in enabled drivers build config
00:02:05.565 mempool/bucket: not in enabled drivers build config
00:02:05.565 mempool/cnxk: not in enabled drivers build config
00:02:05.565 mempool/dpaa: not in enabled drivers build config
00:02:05.565 mempool/dpaa2: not in enabled drivers build config
00:02:05.565 mempool/octeontx: not in enabled drivers build config
00:02:05.565 mempool/stack: not in enabled drivers build config
00:02:05.565 dma/cnxk: not in enabled drivers build config
00:02:05.565 dma/dpaa: not in enabled drivers build config
00:02:05.565 dma/dpaa2: not in enabled drivers build config
00:02:05.565 dma/hisilicon: not in enabled drivers build config
00:02:05.565 dma/idxd: not in enabled drivers build config
00:02:05.565 dma/ioat: not in enabled drivers build config
00:02:05.565 dma/skeleton: not in enabled drivers build config
00:02:05.565 net/af_packet: not in enabled drivers build config
00:02:05.565 net/af_xdp: not in enabled drivers build config
00:02:05.565 net/ark: not in enabled drivers build config
00:02:05.565 net/atlantic: not in enabled drivers build config
00:02:05.565 net/avp: not in enabled drivers build config
00:02:05.565 net/axgbe: not in enabled drivers build config
00:02:05.565 net/bnx2x: not in enabled drivers build config
00:02:05.565 net/bnxt: not in enabled drivers build config
00:02:05.565 net/bonding: not in enabled drivers build config
00:02:05.565 net/cnxk: not in enabled drivers build config
00:02:05.565 net/cpfl: not in enabled drivers build config
00:02:05.565 net/cxgbe: not in enabled drivers build config
00:02:05.565 net/dpaa: not in enabled drivers build config
00:02:05.565 net/dpaa2: not in enabled drivers build config
00:02:05.565 net/e1000: not in enabled drivers build config
00:02:05.565 net/ena: not in enabled drivers build config
00:02:05.565 net/enetc: not in enabled drivers build config
00:02:05.565 net/enetfec: not in enabled drivers build config
00:02:05.565 net/enic: not in enabled drivers build config
00:02:05.565 net/failsafe: not in enabled drivers build config
00:02:05.565 net/fm10k: not in enabled drivers build config
00:02:05.565 net/gve: not in enabled drivers build config
00:02:05.565 net/hinic: not in enabled drivers build config
00:02:05.565 net/hns3: not in enabled drivers build config
00:02:05.565 net/i40e: not in enabled drivers build config
00:02:05.565 net/iavf: not in enabled drivers build config
00:02:05.565 net/ice: not in enabled drivers build config
00:02:05.565 net/idpf: not in enabled drivers build config
00:02:05.565 net/igc: not in enabled drivers build config
00:02:05.565 net/ionic: not in enabled drivers build config
00:02:05.565 net/ipn3ke: not in enabled drivers build config
00:02:05.565 net/ixgbe: not in enabled drivers build config
00:02:05.565 net/mana: not in enabled drivers build config
00:02:05.565 net/memif: not in enabled drivers build config
00:02:05.565 net/mlx4: not in enabled drivers build config
00:02:05.565 net/mlx5: not in enabled drivers build config
00:02:05.565 net/mvneta: not in enabled drivers build config
00:02:05.565 net/mvpp2: not in enabled drivers build config
00:02:05.565 net/netvsc: not in enabled drivers build config
00:02:05.565 net/nfb: not in enabled drivers build config
00:02:05.565 net/nfp: not in enabled drivers build config
00:02:05.565 net/ngbe: not in enabled drivers build config
00:02:05.565 net/null: not in enabled drivers build config
00:02:05.565 net/octeontx: not in enabled drivers build config
00:02:05.565 net/octeon_ep: not in enabled drivers build config
00:02:05.565 net/pcap: not in enabled drivers build config
00:02:05.565 net/pfe: not in enabled drivers build config
00:02:05.565 net/qede: not in enabled drivers build config
00:02:05.565 net/ring: not in enabled drivers build config
00:02:05.565 net/sfc: not in enabled drivers build config
00:02:05.565 net/softnic: not in enabled drivers build config
00:02:05.565 net/tap: not in enabled drivers build config
00:02:05.565 net/thunderx: not in enabled drivers build config
00:02:05.565 net/txgbe: not in enabled drivers build config
00:02:05.565 net/vdev_netvsc: not in enabled drivers build config
00:02:05.565 net/vhost: not in enabled drivers build config
00:02:05.565 net/virtio: not in enabled drivers build config
00:02:05.565 net/vmxnet3: not in enabled drivers build config
00:02:05.565 raw/*: missing internal dependency, "rawdev"
00:02:05.565 crypto/armv8: not in enabled drivers build config
00:02:05.565 crypto/bcmfs: not in enabled drivers build config
00:02:05.565 crypto/caam_jr: not in enabled drivers build config
00:02:05.565 crypto/ccp: not in enabled drivers build config
00:02:05.565 crypto/cnxk: not in enabled drivers build config
00:02:05.565 crypto/dpaa_sec: not in enabled drivers build config
00:02:05.565 crypto/dpaa2_sec: not in enabled drivers build config
00:02:05.565 crypto/ipsec_mb: not in enabled drivers build config
00:02:05.565 crypto/mlx5: not in enabled drivers build config
00:02:05.565 crypto/mvsam: not in enabled drivers build config
00:02:05.565 crypto/nitrox: not in enabled drivers build config
00:02:05.565 crypto/null: not in enabled drivers build config
00:02:05.565 crypto/octeontx: not in enabled drivers build config
00:02:05.565 crypto/openssl: not in enabled drivers build config
00:02:05.565 crypto/scheduler: not in enabled drivers build config
00:02:05.565 crypto/uadk: not in enabled drivers build config
00:02:05.565 crypto/virtio: not in enabled drivers build config
00:02:05.565 compress/isal: not in enabled drivers build config
00:02:05.565 compress/mlx5: not in enabled drivers build config
00:02:05.565 compress/nitrox: not in enabled drivers build config
00:02:05.565 compress/octeontx: not in enabled drivers build config
00:02:05.565 compress/zlib: not in enabled drivers build config
00:02:05.565 regex/*: missing internal dependency, "regexdev"
00:02:05.565 ml/*: missing internal dependency, "mldev"
00:02:05.565 vdpa/ifc: not in enabled drivers build config
00:02:05.565 vdpa/mlx5: not in enabled drivers build config
00:02:05.565 vdpa/nfp: not in enabled drivers build config
00:02:05.565 vdpa/sfc: not in enabled drivers build config
00:02:05.565 event/*: missing internal dependency, "eventdev"
00:02:05.565 baseband/*: missing internal dependency, "bbdev"
00:02:05.565 gpu/*: missing internal dependency, "gpudev"
00:02:05.565
00:02:05.565
00:02:06.133 Build targets in project: 85
00:02:06.133
00:02:06.133 DPDK 24.03.0
00:02:06.133
00:02:06.133 User defined options
00:02:06.133 buildtype : debug
00:02:06.133 default_library : shared
00:02:06.133 libdir : lib
00:02:06.133 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:02:06.133 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:06.133 c_link_args :
00:02:06.133 cpu_instruction_set: native
00:02:06.133 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:02:06.133 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:02:06.133 enable_docs : false
00:02:06.133 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:06.133 enable_kmods : false
00:02:06.133 max_lcores : 128
00:02:06.133 tests : false
00:02:06.133
00:02:06.133 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:06.404 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp'
00:02:06.404 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:06.404 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:06.404 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:06.666 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:06.666 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:06.666 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:06.666 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:06.666 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:06.666 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:06.666 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:06.666 [11/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:06.666 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:06.666 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:06.666 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:06.666 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:06.666 [16/268] Linking static target lib/librte_kvargs.a
00:02:06.666 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:06.666 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:06.666 [19/268] Linking static target lib/librte_log.a
00:02:06.927 [20/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:06.927 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:06.927 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:06.927 [23/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:06.927 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:06.927 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:06.927 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:06.927 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:06.927 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:06.927 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:06.927 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:06.927 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:06.927 [32/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:06.927 [33/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:06.927 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:06.927 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:06.927 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:06.927 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:06.927 [38/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:06.927 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:06.927 [40/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:06.927 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:06.927 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:07.195 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:07.195 [44/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:07.195 [45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:07.195 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:07.195 [47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:07.195 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:07.195 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:07.195 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:07.195 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:07.195 [52/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:07.195 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:07.195 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:07.195 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:07.195 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:07.195 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:07.195 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:07.195 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:07.195 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:07.195 [61/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:07.195 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:07.195 [63/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:07.195 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:07.195 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:07.195 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:07.195 [67/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:07.195 [68/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:07.195 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:07.195 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:07.195 [71/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:07.195 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:07.195 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:07.195 [74/268] Linking static target lib/librte_telemetry.a
00:02:07.195 [75/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:07.195 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:07.195 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:07.195 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:07.195 [79/268] Linking static target lib/librte_pci.a
00:02:07.195 [80/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.195 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:07.195 [82/268] Linking static target lib/librte_ring.a
00:02:07.195 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:07.195 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:07.195 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:07.195 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:07.195 [87/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:07.195 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:07.195 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:07.195 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:07.195 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:07.195 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:07.195 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:07.195 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:07.195 [95/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:07.195 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:07.195 [97/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:07.195 [98/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:07.195 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:07.195 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:07.195 [101/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:07.195 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:07.195 [103/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:07.195 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:07.454 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:07.454 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:07.454 [107/268] Linking static target lib/librte_rcu.a
00:02:07.454 [108/268] Linking static target lib/librte_mempool.a
00:02:07.454 [109/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:07.454 [110/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:07.454 [111/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:07.454 [112/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:07.454 [113/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:07.454 [114/268] Linking static target lib/librte_eal.a
00:02:07.454 [115/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:07.454 [116/268] Linking static target lib/librte_net.a
00:02:07.454 [117/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:07.454 [118/268] Linking static target lib/librte_meter.a
00:02:07.454 [119/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:07.714 [120/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.714 [121/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:07.714 [122/268] Linking static target lib/librte_mbuf.a
00:02:07.714 [123/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:07.714 [124/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:07.714 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:07.714 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:07.714 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:07.714 [128/268] Linking static target lib/librte_timer.a
00:02:07.714 [129/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:07.714 [130/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:07.714 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:07.714 [132/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:07.714 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:07.714 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:07.714 [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.714 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:07.714 [137/268] Linking static target lib/librte_cmdline.a
00:02:07.714 [138/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:07.714 [139/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:07.714 [140/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:07.714 [141/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:07.714 [142/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:07.714 [143/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:07.714 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:07.714 [145/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.714 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:07.714 [147/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:07.714 [148/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:07.714 [149/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:07.714 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:07.714 [151/268] Linking target lib/librte_log.so.24.1
00:02:07.714 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:07.714 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:07.714 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:07.714 [155/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.714 [156/268] Linking static target lib/librte_compressdev.a
00:02:07.714 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:07.714 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:07.714 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:07.714 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:07.714 [161/268] Linking static target lib/librte_dmadev.a
00:02:07.714 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:07.714 [163/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.714 [164/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:07.973 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:07.973 [166/268] Linking static target lib/librte_power.a
00:02:07.973 [167/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:07.973 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:07.973 [169/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:07.973 [170/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:07.973 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:07.973 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:07.973 [173/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.973 [174/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:07.973 [175/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.973 [176/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:07.973 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:07.973 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:07.973 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:07.973 [180/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:07.973 [181/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:07.973 [182/268] Linking static target lib/librte_reorder.a
00:02:07.973 [183/268] Linking static target lib/librte_security.a
00:02:07.973 [184/268] Linking target lib/librte_kvargs.so.24.1
00:02:07.973 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:07.973 [186/268] Linking target lib/librte_telemetry.so.24.1
00:02:07.973 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:07.973 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:07.973 [189/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:07.973 [190/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:07.973 [191/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:07.973 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:07.973 [193/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:07.973 [194/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:07.973 [195/268] Linking static target lib/librte_hash.a
00:02:07.973 [196/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:07.973 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:07.973 [198/268] Linking static target drivers/librte_bus_vdev.a
00:02:08.232 [199/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.232 [200/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:08.232 [201/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:08.232 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:08.232 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:08.232 [204/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.232 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:08.232 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:08.232 [207/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:08.232 [208/268] Linking static target drivers/librte_bus_pci.a
00:02:08.232 [209/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:08.232 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:08.232 [211/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:08.232 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:08.232 [213/268] Linking static target drivers/librte_mempool_ring.a
00:02:08.232 [214/268] Linking static target lib/librte_cryptodev.a
00:02:08.232 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.491 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.491 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.491 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.491 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.491 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.749 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:08.749 [222/268] Linking static target lib/librte_ethdev.a
00:02:08.749 [223/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.749 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:09.008 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.008 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.008 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.946 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:09.946 [229/268] Linking static target lib/librte_vhost.a
00:02:10.204 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.111 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.374 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.277 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.277 [234/268] Linking target lib/librte_eal.so.24.1
00:02:19.277 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:19.277 [236/268] Linking target lib/librte_dmadev.so.24.1
00:02:19.277 [237/268] Linking target lib/librte_pci.so.24.1
00:02:19.277 [238/268] Linking target lib/librte_ring.so.24.1
00:02:19.277 [239/268] Linking target lib/librte_meter.so.24.1
00:02:19.277 [240/268] Linking target lib/librte_timer.so.24.1
00:02:19.277 [241/268] Linking target drivers/librte_bus_vdev.so.24.1
00:02:19.535 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:19.535 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:19.535 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:19.535 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:19.535 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:19.535 [247/268] Linking target drivers/librte_bus_pci.so.24.1
00:02:19.535 [248/268] Linking target lib/librte_rcu.so.24.1
00:02:19.535 [249/268] Linking target lib/librte_mempool.so.24.1
00:02:19.535 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:19.794 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:19.794 [252/268] Linking target lib/librte_mbuf.so.24.1
00:02:19.794 [253/268] Linking target drivers/librte_mempool_ring.so.24.1
00:02:19.794 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:19.794 [255/268] Linking target lib/librte_cryptodev.so.24.1
00:02:19.794 [256/268] Linking target lib/librte_compressdev.so.24.1
00:02:19.794 [257/268] Linking target lib/librte_net.so.24.1
00:02:19.794 [258/268] Linking target lib/librte_reorder.so.24.1
00:02:20.052 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:20.052 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:20.052 [261/268] Linking target lib/librte_hash.so.24.1
00:02:20.052 [262/268] Linking target lib/librte_security.so.24.1
00:02:20.052 [263/268] Linking target
lib/librte_cmdline.so.24.1 00:02:20.052 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:20.311 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:20.311 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:20.311 [267/268] Linking target lib/librte_power.so.24.1 00:02:20.311 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:20.311 INFO: autodetecting backend as ninja 00:02:20.311 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 72 00:02:30.476 CC lib/ut/ut.o 00:02:30.476 CC lib/log/log.o 00:02:30.476 CC lib/log/log_flags.o 00:02:30.476 CC lib/ut_mock/mock.o 00:02:30.476 CC lib/log/log_deprecated.o 00:02:30.476 LIB libspdk_ut.a 00:02:30.476 LIB libspdk_ut_mock.a 00:02:30.476 LIB libspdk_log.a 00:02:30.476 SO libspdk_ut_mock.so.6.0 00:02:30.476 SO libspdk_ut.so.2.0 00:02:30.476 SO libspdk_log.so.7.1 00:02:30.476 SYMLINK libspdk_ut_mock.so 00:02:30.476 SYMLINK libspdk_ut.so 00:02:30.476 SYMLINK libspdk_log.so 00:02:30.476 CXX lib/trace_parser/trace.o 00:02:30.476 CC lib/dma/dma.o 00:02:30.476 CC lib/ioat/ioat.o 00:02:30.476 CC lib/util/bit_array.o 00:02:30.476 CC lib/util/base64.o 00:02:30.476 CC lib/util/cpuset.o 00:02:30.476 CC lib/util/crc32.o 00:02:30.476 CC lib/util/crc16.o 00:02:30.476 CC lib/util/crc32c.o 00:02:30.476 CC lib/util/crc32_ieee.o 00:02:30.476 CC lib/util/crc64.o 00:02:30.476 CC lib/util/dif.o 00:02:30.476 CC lib/util/fd.o 00:02:30.476 CC lib/util/fd_group.o 00:02:30.476 CC lib/util/file.o 00:02:30.476 CC lib/util/math.o 00:02:30.476 CC lib/util/hexlify.o 00:02:30.476 CC lib/util/iov.o 00:02:30.476 CC lib/util/net.o 00:02:30.476 CC lib/util/pipe.o 00:02:30.476 CC lib/util/xor.o 00:02:30.476 CC lib/util/strerror_tls.o 00:02:30.476 CC lib/util/string.o 00:02:30.476 CC lib/util/uuid.o 00:02:30.476 CC lib/util/zipf.o 00:02:30.476 CC lib/util/md5.o 00:02:30.476 CC 
lib/vfio_user/host/vfio_user_pci.o 00:02:30.476 CC lib/vfio_user/host/vfio_user.o 00:02:30.476 LIB libspdk_dma.a 00:02:30.476 SO libspdk_dma.so.5.0 00:02:30.476 LIB libspdk_ioat.a 00:02:30.476 SO libspdk_ioat.so.7.0 00:02:30.476 SYMLINK libspdk_dma.so 00:02:30.735 SYMLINK libspdk_ioat.so 00:02:30.735 LIB libspdk_vfio_user.a 00:02:30.735 SO libspdk_vfio_user.so.5.0 00:02:30.735 SYMLINK libspdk_vfio_user.so 00:02:30.735 LIB libspdk_util.a 00:02:30.735 SO libspdk_util.so.10.1 00:02:30.995 SYMLINK libspdk_util.so 00:02:30.995 LIB libspdk_trace_parser.a 00:02:30.995 SO libspdk_trace_parser.so.6.0 00:02:31.253 SYMLINK libspdk_trace_parser.so 00:02:31.253 CC lib/conf/conf.o 00:02:31.253 CC lib/vmd/led.o 00:02:31.253 CC lib/vmd/vmd.o 00:02:31.253 CC lib/json/json_parse.o 00:02:31.253 CC lib/env_dpdk/env.o 00:02:31.253 CC lib/rdma_utils/rdma_utils.o 00:02:31.253 CC lib/json/json_util.o 00:02:31.253 CC lib/env_dpdk/memory.o 00:02:31.253 CC lib/json/json_write.o 00:02:31.253 CC lib/env_dpdk/pci.o 00:02:31.253 CC lib/env_dpdk/threads.o 00:02:31.253 CC lib/env_dpdk/init.o 00:02:31.253 CC lib/env_dpdk/pci_ioat.o 00:02:31.253 CC lib/idxd/idxd.o 00:02:31.253 CC lib/env_dpdk/pci_idxd.o 00:02:31.253 CC lib/env_dpdk/pci_virtio.o 00:02:31.253 CC lib/idxd/idxd_user.o 00:02:31.253 CC lib/env_dpdk/pci_event.o 00:02:31.253 CC lib/env_dpdk/pci_vmd.o 00:02:31.253 CC lib/idxd/idxd_kernel.o 00:02:31.253 CC lib/env_dpdk/sigbus_handler.o 00:02:31.253 CC lib/env_dpdk/pci_dpdk.o 00:02:31.253 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:31.253 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:31.513 LIB libspdk_conf.a 00:02:31.513 SO libspdk_conf.so.6.0 00:02:31.513 LIB libspdk_rdma_utils.a 00:02:31.513 SO libspdk_rdma_utils.so.1.0 00:02:31.513 SYMLINK libspdk_conf.so 00:02:31.513 LIB libspdk_json.a 00:02:31.513 SO libspdk_json.so.6.0 00:02:31.772 SYMLINK libspdk_rdma_utils.so 00:02:31.772 SYMLINK libspdk_json.so 00:02:31.772 LIB libspdk_idxd.a 00:02:31.772 SO libspdk_idxd.so.12.1 00:02:31.772 LIB libspdk_vmd.a 
00:02:31.772 SO libspdk_vmd.so.6.0 00:02:32.030 SYMLINK libspdk_idxd.so 00:02:32.030 SYMLINK libspdk_vmd.so 00:02:32.031 CC lib/rdma_provider/common.o 00:02:32.031 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:32.031 CC lib/jsonrpc/jsonrpc_client.o 00:02:32.031 CC lib/jsonrpc/jsonrpc_server.o 00:02:32.031 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:32.031 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:32.397 LIB libspdk_rdma_provider.a 00:02:32.397 SO libspdk_rdma_provider.so.7.0 00:02:32.397 LIB libspdk_jsonrpc.a 00:02:32.397 SYMLINK libspdk_rdma_provider.so 00:02:32.397 SO libspdk_jsonrpc.so.6.0 00:02:32.397 SYMLINK libspdk_jsonrpc.so 00:02:32.397 LIB libspdk_env_dpdk.a 00:02:32.397 SO libspdk_env_dpdk.so.15.1 00:02:32.659 SYMLINK libspdk_env_dpdk.so 00:02:32.659 CC lib/rpc/rpc.o 00:02:32.916 LIB libspdk_rpc.a 00:02:32.916 SO libspdk_rpc.so.6.0 00:02:32.916 SYMLINK libspdk_rpc.so 00:02:33.173 CC lib/keyring/keyring.o 00:02:33.173 CC lib/keyring/keyring_rpc.o 00:02:33.431 CC lib/trace/trace.o 00:02:33.431 CC lib/trace/trace_flags.o 00:02:33.431 CC lib/trace/trace_rpc.o 00:02:33.431 CC lib/notify/notify.o 00:02:33.431 CC lib/notify/notify_rpc.o 00:02:33.431 LIB libspdk_keyring.a 00:02:33.431 SO libspdk_keyring.so.2.0 00:02:33.431 LIB libspdk_notify.a 00:02:33.690 SO libspdk_notify.so.6.0 00:02:33.690 SYMLINK libspdk_keyring.so 00:02:33.690 LIB libspdk_trace.a 00:02:33.690 SO libspdk_trace.so.11.0 00:02:33.690 SYMLINK libspdk_notify.so 00:02:33.690 SYMLINK libspdk_trace.so 00:02:33.949 CC lib/sock/sock.o 00:02:33.949 CC lib/sock/sock_rpc.o 00:02:34.208 CC lib/thread/thread.o 00:02:34.208 CC lib/thread/iobuf.o 00:02:34.467 LIB libspdk_sock.a 00:02:34.467 SO libspdk_sock.so.10.0 00:02:34.467 SYMLINK libspdk_sock.so 00:02:34.726 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:34.726 CC lib/nvme/nvme_ctrlr.o 00:02:34.726 CC lib/nvme/nvme_fabric.o 00:02:34.726 CC lib/nvme/nvme_pcie.o 00:02:34.726 CC lib/nvme/nvme_ns_cmd.o 00:02:34.726 CC lib/nvme/nvme_ns.o 00:02:34.726 CC 
lib/nvme/nvme_pcie_common.o 00:02:34.726 CC lib/nvme/nvme_qpair.o 00:02:34.726 CC lib/nvme/nvme.o 00:02:34.726 CC lib/nvme/nvme_quirks.o 00:02:34.726 CC lib/nvme/nvme_transport.o 00:02:34.726 CC lib/nvme/nvme_discovery.o 00:02:34.726 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:34.726 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:34.726 CC lib/nvme/nvme_poll_group.o 00:02:34.726 CC lib/nvme/nvme_tcp.o 00:02:34.726 CC lib/nvme/nvme_opal.o 00:02:34.726 CC lib/nvme/nvme_io_msg.o 00:02:34.726 CC lib/nvme/nvme_zns.o 00:02:34.726 CC lib/nvme/nvme_stubs.o 00:02:34.726 CC lib/nvme/nvme_auth.o 00:02:34.726 CC lib/nvme/nvme_cuse.o 00:02:34.726 CC lib/nvme/nvme_rdma.o 00:02:35.293 LIB libspdk_thread.a 00:02:35.293 SO libspdk_thread.so.11.0 00:02:35.293 SYMLINK libspdk_thread.so 00:02:35.552 CC lib/blob/request.o 00:02:35.552 CC lib/blob/blobstore.o 00:02:35.552 CC lib/virtio/virtio.o 00:02:35.552 CC lib/virtio/virtio_vhost_user.o 00:02:35.552 CC lib/blob/zeroes.o 00:02:35.552 CC lib/blob/blob_bs_dev.o 00:02:35.552 CC lib/virtio/virtio_vfio_user.o 00:02:35.552 CC lib/virtio/virtio_pci.o 00:02:35.552 CC lib/init/json_config.o 00:02:35.552 CC lib/init/subsystem_rpc.o 00:02:35.552 CC lib/init/subsystem.o 00:02:35.552 CC lib/accel/accel.o 00:02:35.552 CC lib/init/rpc.o 00:02:35.552 CC lib/accel/accel_rpc.o 00:02:35.552 CC lib/accel/accel_sw.o 00:02:35.552 CC lib/fsdev/fsdev_io.o 00:02:35.552 CC lib/fsdev/fsdev.o 00:02:35.552 CC lib/fsdev/fsdev_rpc.o 00:02:35.810 LIB libspdk_init.a 00:02:35.810 SO libspdk_init.so.6.0 00:02:35.810 LIB libspdk_virtio.a 00:02:36.068 SO libspdk_virtio.so.7.0 00:02:36.068 SYMLINK libspdk_init.so 00:02:36.068 SYMLINK libspdk_virtio.so 00:02:36.068 LIB libspdk_fsdev.a 00:02:36.326 SO libspdk_fsdev.so.2.0 00:02:36.326 SYMLINK libspdk_fsdev.so 00:02:36.326 CC lib/event/app.o 00:02:36.326 CC lib/event/log_rpc.o 00:02:36.326 CC lib/event/reactor.o 00:02:36.326 CC lib/event/scheduler_static.o 00:02:36.326 CC lib/event/app_rpc.o 00:02:36.583 LIB libspdk_accel.a 
00:02:36.583 SO libspdk_accel.so.16.0 00:02:36.583 LIB libspdk_nvme.a 00:02:36.583 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:36.583 SYMLINK libspdk_accel.so 00:02:36.583 LIB libspdk_event.a 00:02:36.583 SO libspdk_nvme.so.15.0 00:02:36.583 SO libspdk_event.so.14.0 00:02:36.840 SYMLINK libspdk_event.so 00:02:36.840 SYMLINK libspdk_nvme.so 00:02:36.840 CC lib/bdev/bdev.o 00:02:36.840 CC lib/bdev/bdev_rpc.o 00:02:36.840 CC lib/bdev/scsi_nvme.o 00:02:36.841 CC lib/bdev/bdev_zone.o 00:02:36.841 CC lib/bdev/part.o 00:02:37.098 LIB libspdk_fuse_dispatcher.a 00:02:37.098 SO libspdk_fuse_dispatcher.so.1.0 00:02:37.098 SYMLINK libspdk_fuse_dispatcher.so 00:02:38.034 LIB libspdk_blob.a 00:02:38.034 SO libspdk_blob.so.11.0 00:02:38.034 SYMLINK libspdk_blob.so 00:02:38.293 CC lib/lvol/lvol.o 00:02:38.293 CC lib/blobfs/blobfs.o 00:02:38.293 CC lib/blobfs/tree.o 00:02:38.860 LIB libspdk_bdev.a 00:02:38.861 SO libspdk_bdev.so.17.0 00:02:38.861 SYMLINK libspdk_bdev.so 00:02:38.861 LIB libspdk_blobfs.a 00:02:39.119 SO libspdk_blobfs.so.10.0 00:02:39.119 LIB libspdk_lvol.a 00:02:39.119 SO libspdk_lvol.so.10.0 00:02:39.119 SYMLINK libspdk_blobfs.so 00:02:39.119 SYMLINK libspdk_lvol.so 00:02:39.387 CC lib/ublk/ublk.o 00:02:39.387 CC lib/ublk/ublk_rpc.o 00:02:39.387 CC lib/nbd/nbd.o 00:02:39.387 CC lib/nbd/nbd_rpc.o 00:02:39.387 CC lib/nvmf/ctrlr.o 00:02:39.387 CC lib/ftl/ftl_core.o 00:02:39.387 CC lib/scsi/dev.o 00:02:39.387 CC lib/nvmf/ctrlr_discovery.o 00:02:39.387 CC lib/nvmf/ctrlr_bdev.o 00:02:39.387 CC lib/ftl/ftl_init.o 00:02:39.387 CC lib/scsi/lun.o 00:02:39.387 CC lib/nvmf/subsystem.o 00:02:39.387 CC lib/ftl/ftl_io.o 00:02:39.387 CC lib/ftl/ftl_layout.o 00:02:39.387 CC lib/scsi/port.o 00:02:39.387 CC lib/ftl/ftl_debug.o 00:02:39.387 CC lib/ftl/ftl_sb.o 00:02:39.387 CC lib/scsi/scsi.o 00:02:39.387 CC lib/nvmf/nvmf.o 00:02:39.387 CC lib/scsi/scsi_bdev.o 00:02:39.387 CC lib/scsi/task.o 00:02:39.387 CC lib/scsi/scsi_pr.o 00:02:39.387 CC lib/nvmf/nvmf_rpc.o 00:02:39.387 CC 
lib/ftl/ftl_l2p.o 00:02:39.387 CC lib/scsi/scsi_rpc.o 00:02:39.387 CC lib/nvmf/transport.o 00:02:39.387 CC lib/ftl/ftl_l2p_flat.o 00:02:39.387 CC lib/ftl/ftl_band.o 00:02:39.387 CC lib/ftl/ftl_nv_cache.o 00:02:39.387 CC lib/nvmf/tcp.o 00:02:39.387 CC lib/nvmf/mdns_server.o 00:02:39.387 CC lib/nvmf/stubs.o 00:02:39.387 CC lib/ftl/ftl_band_ops.o 00:02:39.387 CC lib/ftl/ftl_writer.o 00:02:39.387 CC lib/nvmf/rdma.o 00:02:39.387 CC lib/ftl/ftl_rq.o 00:02:39.387 CC lib/nvmf/auth.o 00:02:39.387 CC lib/ftl/ftl_reloc.o 00:02:39.387 CC lib/ftl/ftl_p2l.o 00:02:39.387 CC lib/ftl/ftl_l2p_cache.o 00:02:39.387 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:39.387 CC lib/ftl/mngt/ftl_mngt.o 00:02:39.387 CC lib/ftl/ftl_p2l_log.o 00:02:39.387 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:39.387 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:39.387 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:39.387 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:39.387 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:39.387 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:39.387 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:39.387 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:39.387 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:39.387 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:39.387 CC lib/ftl/utils/ftl_conf.o 00:02:39.387 CC lib/ftl/utils/ftl_md.o 00:02:39.387 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:39.387 CC lib/ftl/utils/ftl_bitmap.o 00:02:39.387 CC lib/ftl/utils/ftl_mempool.o 00:02:39.387 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:39.387 CC lib/ftl/utils/ftl_property.o 00:02:39.387 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:39.387 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:39.387 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:39.387 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:39.387 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:39.387 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:39.387 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:39.387 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:39.387 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:39.387 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:39.387 CC 
lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:39.387 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:39.645 CC lib/ftl/base/ftl_base_dev.o 00:02:39.645 CC lib/ftl/base/ftl_base_bdev.o 00:02:39.645 CC lib/ftl/ftl_trace.o 00:02:39.904 LIB libspdk_nbd.a 00:02:39.904 LIB libspdk_scsi.a 00:02:40.163 SO libspdk_nbd.so.7.0 00:02:40.163 LIB libspdk_ublk.a 00:02:40.163 SO libspdk_scsi.so.9.0 00:02:40.163 SO libspdk_ublk.so.3.0 00:02:40.163 SYMLINK libspdk_nbd.so 00:02:40.163 SYMLINK libspdk_scsi.so 00:02:40.163 SYMLINK libspdk_ublk.so 00:02:40.421 CC lib/vhost/vhost_blk.o 00:02:40.421 CC lib/vhost/vhost.o 00:02:40.421 CC lib/vhost/vhost_rpc.o 00:02:40.421 CC lib/vhost/vhost_scsi.o 00:02:40.421 CC lib/vhost/rte_vhost_user.o 00:02:40.421 LIB libspdk_ftl.a 00:02:40.421 CC lib/iscsi/conn.o 00:02:40.421 CC lib/iscsi/init_grp.o 00:02:40.421 CC lib/iscsi/portal_grp.o 00:02:40.421 CC lib/iscsi/iscsi.o 00:02:40.421 CC lib/iscsi/param.o 00:02:40.421 CC lib/iscsi/iscsi_subsystem.o 00:02:40.421 CC lib/iscsi/tgt_node.o 00:02:40.421 CC lib/iscsi/iscsi_rpc.o 00:02:40.421 CC lib/iscsi/task.o 00:02:40.679 SO libspdk_ftl.so.9.0 00:02:40.937 SYMLINK libspdk_ftl.so 00:02:41.195 LIB libspdk_nvmf.a 00:02:41.195 LIB libspdk_vhost.a 00:02:41.195 SO libspdk_nvmf.so.20.0 00:02:41.195 SO libspdk_vhost.so.8.0 00:02:41.454 SYMLINK libspdk_vhost.so 00:02:41.454 SYMLINK libspdk_nvmf.so 00:02:41.454 LIB libspdk_iscsi.a 00:02:41.712 SO libspdk_iscsi.so.8.0 00:02:41.712 SYMLINK libspdk_iscsi.so 00:02:42.280 CC module/env_dpdk/env_dpdk_rpc.o 00:02:42.280 CC module/keyring/linux/keyring_rpc.o 00:02:42.280 CC module/blob/bdev/blob_bdev.o 00:02:42.280 CC module/keyring/linux/keyring.o 00:02:42.280 LIB libspdk_env_dpdk_rpc.a 00:02:42.280 CC module/scheduler/gscheduler/gscheduler.o 00:02:42.280 CC module/keyring/file/keyring.o 00:02:42.280 CC module/keyring/file/keyring_rpc.o 00:02:42.280 CC module/accel/iaa/accel_iaa_rpc.o 00:02:42.280 CC module/accel/iaa/accel_iaa.o 00:02:42.280 CC module/accel/ioat/accel_ioat_rpc.o 
00:02:42.280 CC module/accel/dsa/accel_dsa.o 00:02:42.280 CC module/accel/dsa/accel_dsa_rpc.o 00:02:42.280 CC module/accel/ioat/accel_ioat.o 00:02:42.280 CC module/sock/posix/posix.o 00:02:42.280 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:42.280 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:42.280 CC module/fsdev/aio/linux_aio_mgr.o 00:02:42.280 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:42.280 CC module/fsdev/aio/fsdev_aio.o 00:02:42.538 CC module/accel/error/accel_error.o 00:02:42.538 SO libspdk_env_dpdk_rpc.so.6.0 00:02:42.538 CC module/accel/error/accel_error_rpc.o 00:02:42.538 SYMLINK libspdk_env_dpdk_rpc.so 00:02:42.538 LIB libspdk_keyring_linux.a 00:02:42.538 LIB libspdk_scheduler_gscheduler.a 00:02:42.538 LIB libspdk_keyring_file.a 00:02:42.538 SO libspdk_keyring_linux.so.1.0 00:02:42.538 LIB libspdk_scheduler_dpdk_governor.a 00:02:42.538 SO libspdk_scheduler_gscheduler.so.4.0 00:02:42.538 SO libspdk_keyring_file.so.2.0 00:02:42.538 LIB libspdk_accel_ioat.a 00:02:42.538 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:42.538 LIB libspdk_scheduler_dynamic.a 00:02:42.538 LIB libspdk_accel_iaa.a 00:02:42.538 SYMLINK libspdk_keyring_linux.so 00:02:42.538 SYMLINK libspdk_scheduler_gscheduler.so 00:02:42.538 LIB libspdk_accel_error.a 00:02:42.538 SO libspdk_accel_ioat.so.6.0 00:02:42.538 SO libspdk_scheduler_dynamic.so.4.0 00:02:42.538 SYMLINK libspdk_keyring_file.so 00:02:42.538 SO libspdk_accel_iaa.so.3.0 00:02:42.538 LIB libspdk_blob_bdev.a 00:02:42.795 LIB libspdk_accel_dsa.a 00:02:42.795 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:42.795 SO libspdk_accel_error.so.2.0 00:02:42.795 SO libspdk_blob_bdev.so.11.0 00:02:42.795 SYMLINK libspdk_scheduler_dynamic.so 00:02:42.795 SYMLINK libspdk_accel_ioat.so 00:02:42.795 SO libspdk_accel_dsa.so.5.0 00:02:42.795 SYMLINK libspdk_accel_iaa.so 00:02:42.795 SYMLINK libspdk_accel_error.so 00:02:42.795 SYMLINK libspdk_blob_bdev.so 00:02:42.795 SYMLINK libspdk_accel_dsa.so 00:02:43.054 LIB 
libspdk_fsdev_aio.a 00:02:43.054 SO libspdk_fsdev_aio.so.1.0 00:02:43.054 LIB libspdk_sock_posix.a 00:02:43.054 SO libspdk_sock_posix.so.6.0 00:02:43.054 SYMLINK libspdk_fsdev_aio.so 00:02:43.054 SYMLINK libspdk_sock_posix.so 00:02:43.312 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:43.312 CC module/bdev/lvol/vbdev_lvol.o 00:02:43.312 CC module/bdev/error/vbdev_error.o 00:02:43.312 CC module/bdev/error/vbdev_error_rpc.o 00:02:43.312 CC module/bdev/gpt/gpt.o 00:02:43.312 CC module/bdev/gpt/vbdev_gpt.o 00:02:43.312 CC module/bdev/null/bdev_null.o 00:02:43.312 CC module/bdev/null/bdev_null_rpc.o 00:02:43.312 CC module/bdev/passthru/vbdev_passthru.o 00:02:43.312 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:43.312 CC module/bdev/delay/vbdev_delay.o 00:02:43.312 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:43.312 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:43.312 CC module/bdev/malloc/bdev_malloc.o 00:02:43.312 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:43.312 CC module/bdev/aio/bdev_aio.o 00:02:43.312 CC module/bdev/nvme/nvme_rpc.o 00:02:43.312 CC module/bdev/nvme/bdev_nvme.o 00:02:43.312 CC module/bdev/iscsi/bdev_iscsi.o 00:02:43.312 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:43.312 CC module/bdev/aio/bdev_aio_rpc.o 00:02:43.312 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:43.312 CC module/bdev/nvme/bdev_mdns_client.o 00:02:43.312 CC module/bdev/nvme/vbdev_opal.o 00:02:43.312 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:43.312 CC module/bdev/raid/bdev_raid.o 00:02:43.312 CC module/bdev/raid/bdev_raid_sb.o 00:02:43.312 CC module/bdev/raid/bdev_raid_rpc.o 00:02:43.312 CC module/bdev/raid/raid0.o 00:02:43.312 CC module/bdev/raid/concat.o 00:02:43.312 CC module/bdev/raid/raid1.o 00:02:43.312 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:43.312 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:43.312 CC module/bdev/split/vbdev_split.o 00:02:43.312 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:43.312 CC module/bdev/split/vbdev_split_rpc.o 00:02:43.312 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:43.312 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:43.312 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:43.312 CC module/blobfs/bdev/blobfs_bdev.o 00:02:43.312 CC module/bdev/ftl/bdev_ftl.o 00:02:43.312 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:43.571 LIB libspdk_bdev_null.a 00:02:43.571 LIB libspdk_bdev_error.a 00:02:43.571 LIB libspdk_bdev_gpt.a 00:02:43.571 SO libspdk_bdev_null.so.6.0 00:02:43.571 SO libspdk_bdev_error.so.6.0 00:02:43.571 LIB libspdk_blobfs_bdev.a 00:02:43.571 SO libspdk_bdev_gpt.so.6.0 00:02:43.571 LIB libspdk_bdev_aio.a 00:02:43.571 SO libspdk_blobfs_bdev.so.6.0 00:02:43.571 LIB libspdk_bdev_ftl.a 00:02:43.571 SYMLINK libspdk_bdev_error.so 00:02:43.571 LIB libspdk_bdev_zone_block.a 00:02:43.571 LIB libspdk_bdev_malloc.a 00:02:43.571 SO libspdk_bdev_aio.so.6.0 00:02:43.571 LIB libspdk_bdev_split.a 00:02:43.571 SYMLINK libspdk_bdev_null.so 00:02:43.571 LIB libspdk_bdev_iscsi.a 00:02:43.571 LIB libspdk_bdev_delay.a 00:02:43.571 SO libspdk_bdev_malloc.so.6.0 00:02:43.571 SO libspdk_bdev_ftl.so.6.0 00:02:43.571 SYMLINK libspdk_bdev_gpt.so 00:02:43.571 SYMLINK libspdk_blobfs_bdev.so 00:02:43.571 SO libspdk_bdev_zone_block.so.6.0 00:02:43.571 SO libspdk_bdev_iscsi.so.6.0 00:02:43.571 SO libspdk_bdev_split.so.6.0 00:02:43.571 SYMLINK libspdk_bdev_aio.so 00:02:43.571 SO libspdk_bdev_delay.so.6.0 00:02:43.571 LIB libspdk_bdev_passthru.a 00:02:43.831 SO libspdk_bdev_passthru.so.6.0 00:02:43.831 SYMLINK libspdk_bdev_malloc.so 00:02:43.831 SYMLINK libspdk_bdev_ftl.so 00:02:43.831 SYMLINK libspdk_bdev_iscsi.so 00:02:43.831 SYMLINK libspdk_bdev_split.so 00:02:43.831 LIB libspdk_bdev_lvol.a 00:02:43.831 SYMLINK libspdk_bdev_zone_block.so 00:02:43.831 SYMLINK libspdk_bdev_delay.so 00:02:43.831 LIB libspdk_bdev_virtio.a 00:02:43.831 SO libspdk_bdev_lvol.so.6.0 00:02:43.831 SYMLINK libspdk_bdev_passthru.so 00:02:43.831 SO libspdk_bdev_virtio.so.6.0 00:02:43.831 SYMLINK libspdk_bdev_lvol.so 00:02:43.831 
SYMLINK libspdk_bdev_virtio.so 00:02:44.090 LIB libspdk_bdev_raid.a 00:02:44.090 SO libspdk_bdev_raid.so.6.0 00:02:44.349 SYMLINK libspdk_bdev_raid.so 00:02:45.288 LIB libspdk_bdev_nvme.a 00:02:45.288 SO libspdk_bdev_nvme.so.7.1 00:02:45.288 SYMLINK libspdk_bdev_nvme.so 00:02:45.857 CC module/event/subsystems/vmd/vmd.o 00:02:46.115 CC module/event/subsystems/fsdev/fsdev.o 00:02:46.115 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:46.115 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:46.115 CC module/event/subsystems/keyring/keyring.o 00:02:46.115 CC module/event/subsystems/scheduler/scheduler.o 00:02:46.115 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:46.115 CC module/event/subsystems/iobuf/iobuf.o 00:02:46.115 CC module/event/subsystems/sock/sock.o 00:02:46.115 LIB libspdk_event_keyring.a 00:02:46.115 LIB libspdk_event_fsdev.a 00:02:46.115 LIB libspdk_event_vmd.a 00:02:46.115 LIB libspdk_event_scheduler.a 00:02:46.115 LIB libspdk_event_vhost_blk.a 00:02:46.115 SO libspdk_event_fsdev.so.1.0 00:02:46.115 SO libspdk_event_keyring.so.1.0 00:02:46.115 LIB libspdk_event_iobuf.a 00:02:46.115 LIB libspdk_event_sock.a 00:02:46.115 SO libspdk_event_vmd.so.6.0 00:02:46.115 SO libspdk_event_vhost_blk.so.3.0 00:02:46.115 SO libspdk_event_scheduler.so.4.0 00:02:46.115 SO libspdk_event_sock.so.5.0 00:02:46.115 SO libspdk_event_iobuf.so.3.0 00:02:46.115 SYMLINK libspdk_event_keyring.so 00:02:46.115 SYMLINK libspdk_event_fsdev.so 00:02:46.115 SYMLINK libspdk_event_scheduler.so 00:02:46.374 SYMLINK libspdk_event_vmd.so 00:02:46.374 SYMLINK libspdk_event_vhost_blk.so 00:02:46.374 SYMLINK libspdk_event_sock.so 00:02:46.374 SYMLINK libspdk_event_iobuf.so 00:02:46.633 CC module/event/subsystems/accel/accel.o 00:02:46.893 LIB libspdk_event_accel.a 00:02:46.893 SO libspdk_event_accel.so.6.0 00:02:46.893 SYMLINK libspdk_event_accel.so 00:02:47.152 CC module/event/subsystems/bdev/bdev.o 00:02:47.411 LIB libspdk_event_bdev.a 00:02:47.411 SO libspdk_event_bdev.so.6.0 
00:02:47.411 SYMLINK libspdk_event_bdev.so 00:02:47.671 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:47.930 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:47.930 CC module/event/subsystems/scsi/scsi.o 00:02:47.930 CC module/event/subsystems/nbd/nbd.o 00:02:47.930 CC module/event/subsystems/ublk/ublk.o 00:02:47.930 LIB libspdk_event_ublk.a 00:02:47.930 LIB libspdk_event_nbd.a 00:02:47.930 LIB libspdk_event_scsi.a 00:02:47.930 SO libspdk_event_ublk.so.3.0 00:02:47.930 SO libspdk_event_nbd.so.6.0 00:02:47.930 SO libspdk_event_scsi.so.6.0 00:02:47.930 LIB libspdk_event_nvmf.a 00:02:47.930 SO libspdk_event_nvmf.so.6.0 00:02:47.930 SYMLINK libspdk_event_ublk.so 00:02:47.930 SYMLINK libspdk_event_nbd.so 00:02:48.189 SYMLINK libspdk_event_scsi.so 00:02:48.189 SYMLINK libspdk_event_nvmf.so 00:02:48.449 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:48.449 CC module/event/subsystems/iscsi/iscsi.o 00:02:48.449 LIB libspdk_event_vhost_scsi.a 00:02:48.449 LIB libspdk_event_iscsi.a 00:02:48.449 SO libspdk_event_vhost_scsi.so.3.0 00:02:48.709 SO libspdk_event_iscsi.so.6.0 00:02:48.709 SYMLINK libspdk_event_vhost_scsi.so 00:02:48.709 SYMLINK libspdk_event_iscsi.so 00:02:48.968 SO libspdk.so.6.0 00:02:48.968 SYMLINK libspdk.so 00:02:49.235 CC app/spdk_nvme_discover/discovery_aer.o 00:02:49.235 CXX app/trace/trace.o 00:02:49.235 CC app/spdk_top/spdk_top.o 00:02:49.235 CC app/spdk_nvme_identify/identify.o 00:02:49.235 CC app/spdk_nvme_perf/perf.o 00:02:49.235 TEST_HEADER include/spdk/accel.h 00:02:49.235 CC app/trace_record/trace_record.o 00:02:49.235 TEST_HEADER include/spdk/accel_module.h 00:02:49.235 TEST_HEADER include/spdk/assert.h 00:02:49.235 TEST_HEADER include/spdk/base64.h 00:02:49.235 TEST_HEADER include/spdk/bdev.h 00:02:49.235 TEST_HEADER include/spdk/bdev_module.h 00:02:49.235 TEST_HEADER include/spdk/barrier.h 00:02:49.235 TEST_HEADER include/spdk/bdev_zone.h 00:02:49.235 CC test/rpc_client/rpc_client_test.o 00:02:49.235 TEST_HEADER 
include/spdk/bit_pool.h 00:02:49.235 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:49.235 TEST_HEADER include/spdk/blobfs.h 00:02:49.235 TEST_HEADER include/spdk/blob.h 00:02:49.235 TEST_HEADER include/spdk/bit_array.h 00:02:49.235 TEST_HEADER include/spdk/conf.h 00:02:49.235 TEST_HEADER include/spdk/config.h 00:02:49.235 TEST_HEADER include/spdk/blob_bdev.h 00:02:49.235 TEST_HEADER include/spdk/cpuset.h 00:02:49.235 TEST_HEADER include/spdk/crc32.h 00:02:49.235 TEST_HEADER include/spdk/crc64.h 00:02:49.235 CC app/spdk_lspci/spdk_lspci.o 00:02:49.235 TEST_HEADER include/spdk/crc16.h 00:02:49.235 TEST_HEADER include/spdk/dma.h 00:02:49.235 TEST_HEADER include/spdk/endian.h 00:02:49.235 TEST_HEADER include/spdk/dif.h 00:02:49.235 TEST_HEADER include/spdk/env.h 00:02:49.235 TEST_HEADER include/spdk/event.h 00:02:49.235 TEST_HEADER include/spdk/env_dpdk.h 00:02:49.235 TEST_HEADER include/spdk/fd_group.h 00:02:49.235 TEST_HEADER include/spdk/fsdev.h 00:02:49.235 TEST_HEADER include/spdk/fd.h 00:02:49.235 TEST_HEADER include/spdk/fsdev_module.h 00:02:49.235 TEST_HEADER include/spdk/ftl.h 00:02:49.235 TEST_HEADER include/spdk/file.h 00:02:49.235 TEST_HEADER include/spdk/gpt_spec.h 00:02:49.235 TEST_HEADER include/spdk/hexlify.h 00:02:49.235 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:49.235 TEST_HEADER include/spdk/histogram_data.h 00:02:49.235 TEST_HEADER include/spdk/idxd.h 00:02:49.235 TEST_HEADER include/spdk/idxd_spec.h 00:02:49.235 TEST_HEADER include/spdk/init.h 00:02:49.235 TEST_HEADER include/spdk/ioat.h 00:02:49.235 TEST_HEADER include/spdk/ioat_spec.h 00:02:49.235 TEST_HEADER include/spdk/iscsi_spec.h 00:02:49.235 TEST_HEADER include/spdk/json.h 00:02:49.235 TEST_HEADER include/spdk/jsonrpc.h 00:02:49.235 CC app/nvmf_tgt/nvmf_main.o 00:02:49.235 TEST_HEADER include/spdk/keyring.h 00:02:49.235 TEST_HEADER include/spdk/likely.h 00:02:49.235 TEST_HEADER include/spdk/keyring_module.h 00:02:49.235 TEST_HEADER include/spdk/log.h 00:02:49.235 TEST_HEADER 
include/spdk/lvol.h 00:02:49.235 TEST_HEADER include/spdk/md5.h 00:02:49.235 TEST_HEADER include/spdk/memory.h 00:02:49.235 TEST_HEADER include/spdk/mmio.h 00:02:49.235 TEST_HEADER include/spdk/nbd.h 00:02:49.235 TEST_HEADER include/spdk/net.h 00:02:49.235 TEST_HEADER include/spdk/notify.h 00:02:49.235 TEST_HEADER include/spdk/nvme.h 00:02:49.235 TEST_HEADER include/spdk/nvme_intel.h 00:02:49.235 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:49.235 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:49.235 TEST_HEADER include/spdk/nvme_spec.h 00:02:49.235 TEST_HEADER include/spdk/nvme_zns.h 00:02:49.235 CC app/spdk_dd/spdk_dd.o 00:02:49.235 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:49.235 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:49.235 TEST_HEADER include/spdk/nvmf.h 00:02:49.235 TEST_HEADER include/spdk/nvmf_spec.h 00:02:49.235 TEST_HEADER include/spdk/opal.h 00:02:49.235 TEST_HEADER include/spdk/nvmf_transport.h 00:02:49.235 TEST_HEADER include/spdk/opal_spec.h 00:02:49.235 TEST_HEADER include/spdk/pci_ids.h 00:02:49.235 TEST_HEADER include/spdk/pipe.h 00:02:49.235 TEST_HEADER include/spdk/reduce.h 00:02:49.235 TEST_HEADER include/spdk/queue.h 00:02:49.235 TEST_HEADER include/spdk/rpc.h 00:02:49.235 TEST_HEADER include/spdk/scheduler.h 00:02:49.235 TEST_HEADER include/spdk/scsi.h 00:02:49.235 TEST_HEADER include/spdk/scsi_spec.h 00:02:49.235 TEST_HEADER include/spdk/sock.h 00:02:49.235 TEST_HEADER include/spdk/stdinc.h 00:02:49.235 TEST_HEADER include/spdk/string.h 00:02:49.235 TEST_HEADER include/spdk/thread.h 00:02:49.235 TEST_HEADER include/spdk/trace.h 00:02:49.235 TEST_HEADER include/spdk/trace_parser.h 00:02:49.235 TEST_HEADER include/spdk/tree.h 00:02:49.235 TEST_HEADER include/spdk/ublk.h 00:02:49.235 TEST_HEADER include/spdk/util.h 00:02:49.235 TEST_HEADER include/spdk/uuid.h 00:02:49.235 TEST_HEADER include/spdk/version.h 00:02:49.235 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:49.235 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:49.235 TEST_HEADER 
include/spdk/vhost.h 00:02:49.235 TEST_HEADER include/spdk/vmd.h 00:02:49.235 TEST_HEADER include/spdk/zipf.h 00:02:49.235 TEST_HEADER include/spdk/xor.h 00:02:49.235 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:49.235 CXX test/cpp_headers/accel.o 00:02:49.235 CXX test/cpp_headers/accel_module.o 00:02:49.235 CXX test/cpp_headers/assert.o 00:02:49.235 CXX test/cpp_headers/barrier.o 00:02:49.235 CXX test/cpp_headers/base64.o 00:02:49.235 CXX test/cpp_headers/bdev_module.o 00:02:49.235 CXX test/cpp_headers/bdev.o 00:02:49.235 CXX test/cpp_headers/bdev_zone.o 00:02:49.235 CXX test/cpp_headers/blob_bdev.o 00:02:49.235 CXX test/cpp_headers/bit_array.o 00:02:49.235 CXX test/cpp_headers/bit_pool.o 00:02:49.235 CXX test/cpp_headers/blobfs_bdev.o 00:02:49.235 CXX test/cpp_headers/blobfs.o 00:02:49.235 CXX test/cpp_headers/blob.o 00:02:49.235 CXX test/cpp_headers/conf.o 00:02:49.235 CXX test/cpp_headers/config.o 00:02:49.235 CXX test/cpp_headers/crc32.o 00:02:49.235 CXX test/cpp_headers/cpuset.o 00:02:49.235 CXX test/cpp_headers/crc16.o 00:02:49.235 CXX test/cpp_headers/crc64.o 00:02:49.235 CXX test/cpp_headers/dif.o 00:02:49.235 CXX test/cpp_headers/dma.o 00:02:49.235 CXX test/cpp_headers/endian.o 00:02:49.235 CXX test/cpp_headers/env.o 00:02:49.235 CXX test/cpp_headers/env_dpdk.o 00:02:49.235 CXX test/cpp_headers/event.o 00:02:49.235 CXX test/cpp_headers/fd_group.o 00:02:49.235 CXX test/cpp_headers/fd.o 00:02:49.235 CXX test/cpp_headers/file.o 00:02:49.235 CXX test/cpp_headers/fsdev.o 00:02:49.235 CXX test/cpp_headers/fsdev_module.o 00:02:49.235 CXX test/cpp_headers/ftl.o 00:02:49.235 CXX test/cpp_headers/gpt_spec.o 00:02:49.235 CXX test/cpp_headers/fuse_dispatcher.o 00:02:49.235 CXX test/cpp_headers/hexlify.o 00:02:49.235 CXX test/cpp_headers/histogram_data.o 00:02:49.235 CC app/spdk_tgt/spdk_tgt.o 00:02:49.235 CXX test/cpp_headers/idxd_spec.o 00:02:49.235 CXX test/cpp_headers/idxd.o 00:02:49.235 CXX test/cpp_headers/ioat.o 00:02:49.235 CXX test/cpp_headers/init.o 
00:02:49.235 CXX test/cpp_headers/ioat_spec.o 00:02:49.235 CXX test/cpp_headers/iscsi_spec.o 00:02:49.235 CC app/iscsi_tgt/iscsi_tgt.o 00:02:49.235 CXX test/cpp_headers/json.o 00:02:49.235 CC test/app/jsoncat/jsoncat.o 00:02:49.235 CC test/app/histogram_perf/histogram_perf.o 00:02:49.235 CC test/env/pci/pci_ut.o 00:02:49.503 CC test/env/vtophys/vtophys.o 00:02:49.503 CC test/app/stub/stub.o 00:02:49.503 CC examples/ioat/verify/verify.o 00:02:49.503 CC test/env/memory/memory_ut.o 00:02:49.503 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:49.503 CC app/fio/nvme/fio_plugin.o 00:02:49.503 CC examples/util/zipf/zipf.o 00:02:49.503 CC examples/ioat/perf/perf.o 00:02:49.503 CC test/thread/poller_perf/poller_perf.o 00:02:49.503 CC test/dma/test_dma/test_dma.o 00:02:49.503 CC app/fio/bdev/fio_plugin.o 00:02:49.503 CC test/app/bdev_svc/bdev_svc.o 00:02:49.503 LINK spdk_nvme_discover 00:02:49.503 LINK rpc_client_test 00:02:49.503 CC test/env/mem_callbacks/mem_callbacks.o 00:02:49.503 LINK spdk_lspci 00:02:49.765 LINK jsoncat 00:02:49.765 LINK histogram_perf 00:02:49.765 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:49.765 LINK vtophys 00:02:49.765 LINK nvmf_tgt 00:02:49.765 LINK zipf 00:02:49.765 LINK stub 00:02:49.765 CXX test/cpp_headers/jsonrpc.o 00:02:49.765 CXX test/cpp_headers/keyring.o 00:02:49.765 LINK env_dpdk_post_init 00:02:49.765 CXX test/cpp_headers/keyring_module.o 00:02:49.765 CXX test/cpp_headers/likely.o 00:02:49.765 CXX test/cpp_headers/log.o 00:02:49.765 LINK spdk_trace_record 00:02:49.765 CXX test/cpp_headers/lvol.o 00:02:49.765 CXX test/cpp_headers/md5.o 00:02:49.765 CXX test/cpp_headers/memory.o 00:02:49.765 CXX test/cpp_headers/mmio.o 00:02:49.765 CXX test/cpp_headers/nbd.o 00:02:49.765 CXX test/cpp_headers/net.o 00:02:49.765 CXX test/cpp_headers/notify.o 00:02:49.765 CXX test/cpp_headers/nvme.o 00:02:49.765 LINK iscsi_tgt 00:02:49.765 LINK spdk_tgt 00:02:49.765 CXX test/cpp_headers/nvme_intel.o 00:02:49.765 LINK interrupt_tgt 00:02:49.765 
LINK verify 00:02:49.765 CXX test/cpp_headers/nvme_ocssd.o 00:02:49.765 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:49.765 LINK poller_perf 00:02:49.765 LINK bdev_svc 00:02:50.026 CXX test/cpp_headers/nvme_spec.o 00:02:50.026 CXX test/cpp_headers/nvme_zns.o 00:02:50.026 CXX test/cpp_headers/nvmf_cmd.o 00:02:50.026 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:50.026 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:50.026 CXX test/cpp_headers/nvmf.o 00:02:50.026 CXX test/cpp_headers/nvmf_spec.o 00:02:50.026 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:50.026 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:50.026 CXX test/cpp_headers/nvmf_transport.o 00:02:50.026 LINK spdk_trace 00:02:50.026 LINK spdk_dd 00:02:50.026 CXX test/cpp_headers/opal_spec.o 00:02:50.026 CXX test/cpp_headers/opal.o 00:02:50.026 CXX test/cpp_headers/pipe.o 00:02:50.026 CXX test/cpp_headers/pci_ids.o 00:02:50.026 CXX test/cpp_headers/queue.o 00:02:50.026 CXX test/cpp_headers/reduce.o 00:02:50.026 CXX test/cpp_headers/rpc.o 00:02:50.026 CXX test/cpp_headers/scheduler.o 00:02:50.026 CXX test/cpp_headers/scsi.o 00:02:50.026 CXX test/cpp_headers/scsi_spec.o 00:02:50.026 CXX test/cpp_headers/sock.o 00:02:50.026 CXX test/cpp_headers/stdinc.o 00:02:50.026 CXX test/cpp_headers/string.o 00:02:50.026 CXX test/cpp_headers/thread.o 00:02:50.026 CXX test/cpp_headers/trace.o 00:02:50.026 CXX test/cpp_headers/trace_parser.o 00:02:50.026 CXX test/cpp_headers/tree.o 00:02:50.026 CXX test/cpp_headers/ublk.o 00:02:50.026 CXX test/cpp_headers/util.o 00:02:50.026 LINK ioat_perf 00:02:50.026 CXX test/cpp_headers/uuid.o 00:02:50.026 CXX test/cpp_headers/version.o 00:02:50.026 CXX test/cpp_headers/vfio_user_pci.o 00:02:50.026 CXX test/cpp_headers/vfio_user_spec.o 00:02:50.026 CXX test/cpp_headers/vhost.o 00:02:50.026 CXX test/cpp_headers/vmd.o 00:02:50.026 CXX test/cpp_headers/xor.o 00:02:50.026 CXX test/cpp_headers/zipf.o 00:02:50.026 LINK pci_ut 00:02:50.286 LINK spdk_bdev 00:02:50.286 LINK test_dma 00:02:50.286 CC 
examples/vmd/led/led.o 00:02:50.286 CC examples/idxd/perf/perf.o 00:02:50.286 CC examples/sock/hello_world/hello_sock.o 00:02:50.286 LINK nvme_fuzz 00:02:50.286 CC examples/vmd/lsvmd/lsvmd.o 00:02:50.545 CC test/event/reactor/reactor.o 00:02:50.545 CC test/event/event_perf/event_perf.o 00:02:50.545 LINK spdk_nvme 00:02:50.545 CC test/event/reactor_perf/reactor_perf.o 00:02:50.545 CC examples/thread/thread/thread_ex.o 00:02:50.545 CC app/vhost/vhost.o 00:02:50.545 CC test/event/app_repeat/app_repeat.o 00:02:50.545 CC test/event/scheduler/scheduler.o 00:02:50.545 LINK mem_callbacks 00:02:50.545 LINK spdk_nvme_perf 00:02:50.545 LINK lsvmd 00:02:50.545 LINK spdk_top 00:02:50.545 LINK led 00:02:50.545 LINK reactor 00:02:50.545 LINK reactor_perf 00:02:50.545 LINK event_perf 00:02:50.545 LINK vhost_fuzz 00:02:50.545 LINK app_repeat 00:02:50.545 LINK hello_sock 00:02:50.545 LINK vhost 00:02:50.804 LINK spdk_nvme_identify 00:02:50.804 LINK thread 00:02:50.804 LINK scheduler 00:02:50.804 LINK idxd_perf 00:02:50.804 CC test/nvme/reset/reset.o 00:02:50.804 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:50.804 CC test/nvme/startup/startup.o 00:02:50.804 CC test/nvme/simple_copy/simple_copy.o 00:02:50.804 CC test/nvme/fused_ordering/fused_ordering.o 00:02:50.804 CC test/nvme/sgl/sgl.o 00:02:50.804 CC test/nvme/aer/aer.o 00:02:50.804 CC test/nvme/err_injection/err_injection.o 00:02:50.804 CC test/nvme/compliance/nvme_compliance.o 00:02:50.804 CC test/nvme/e2edp/nvme_dp.o 00:02:50.804 CC test/nvme/overhead/overhead.o 00:02:50.804 CC test/nvme/boot_partition/boot_partition.o 00:02:50.804 CC test/nvme/fdp/fdp.o 00:02:50.804 CC test/nvme/cuse/cuse.o 00:02:50.804 CC test/nvme/connect_stress/connect_stress.o 00:02:50.804 CC test/nvme/reserve/reserve.o 00:02:50.804 CC test/blobfs/mkfs/mkfs.o 00:02:50.804 CC test/accel/dif/dif.o 00:02:50.804 LINK memory_ut 00:02:51.063 CC test/lvol/esnap/esnap.o 00:02:51.063 LINK startup 00:02:51.063 LINK doorbell_aers 00:02:51.063 LINK boot_partition 
00:02:51.063 LINK err_injection 00:02:51.063 LINK fused_ordering 00:02:51.063 LINK connect_stress 00:02:51.063 CC examples/nvme/hello_world/hello_world.o 00:02:51.063 CC examples/nvme/hotplug/hotplug.o 00:02:51.063 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:51.063 CC examples/nvme/abort/abort.o 00:02:51.063 CC examples/nvme/arbitration/arbitration.o 00:02:51.063 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:51.063 CC examples/nvme/reconnect/reconnect.o 00:02:51.063 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:51.063 LINK reserve 00:02:51.063 LINK simple_copy 00:02:51.063 LINK mkfs 00:02:51.063 LINK sgl 00:02:51.063 LINK reset 00:02:51.063 LINK overhead 00:02:51.063 LINK aer 00:02:51.063 CC examples/accel/perf/accel_perf.o 00:02:51.063 LINK nvme_compliance 00:02:51.063 CC examples/blob/cli/blobcli.o 00:02:51.063 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:51.063 LINK nvme_dp 00:02:51.063 CC examples/blob/hello_world/hello_blob.o 00:02:51.321 LINK fdp 00:02:51.321 LINK pmr_persistence 00:02:51.321 LINK cmb_copy 00:02:51.321 LINK hello_world 00:02:51.321 LINK hotplug 00:02:51.321 LINK arbitration 00:02:51.321 LINK reconnect 00:02:51.321 LINK abort 00:02:51.580 LINK hello_blob 00:02:51.580 LINK hello_fsdev 00:02:51.580 LINK nvme_manage 00:02:51.580 LINK dif 00:02:51.580 LINK iscsi_fuzz 00:02:51.580 LINK accel_perf 00:02:51.580 LINK blobcli 00:02:51.936 LINK cuse 00:02:51.936 CC test/bdev/bdevio/bdevio.o 00:02:52.218 CC examples/bdev/hello_world/hello_bdev.o 00:02:52.218 CC examples/bdev/bdevperf/bdevperf.o 00:02:52.477 LINK bdevio 00:02:52.477 LINK hello_bdev 00:02:52.735 LINK bdevperf 00:02:53.306 CC examples/nvmf/nvmf/nvmf.o 00:02:53.565 LINK nvmf 00:02:54.503 LINK esnap 00:02:55.072 00:02:55.072 real 0m58.195s 00:02:55.072 user 8m18.087s 00:02:55.072 sys 3m21.516s 00:02:55.072 11:25:58 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:55.072 11:25:58 make -- common/autotest_common.sh@10 -- $ set +x 00:02:55.072 
************************************ 00:02:55.072 END TEST make 00:02:55.072 ************************************ 00:02:55.072 11:25:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:55.072 11:25:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:55.072 11:25:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:55.072 11:25:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.072 11:25:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:55.072 11:25:58 -- pm/common@44 -- $ pid=1410306 00:02:55.072 11:25:58 -- pm/common@50 -- $ kill -TERM 1410306 00:02:55.072 11:25:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.072 11:25:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:55.072 11:25:58 -- pm/common@44 -- $ pid=1410308 00:02:55.072 11:25:58 -- pm/common@50 -- $ kill -TERM 1410308 00:02:55.072 11:25:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.072 11:25:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:55.072 11:25:58 -- pm/common@44 -- $ pid=1410310 00:02:55.072 11:25:58 -- pm/common@50 -- $ kill -TERM 1410310 00:02:55.072 11:25:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.072 11:25:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:55.072 11:25:58 -- pm/common@44 -- $ pid=1410335 00:02:55.072 11:25:58 -- pm/common@50 -- $ sudo -E kill -TERM 1410335 00:02:55.072 11:25:58 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:55.072 11:25:58 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:55.072 11:25:58 -- 
common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:55.072 11:25:58 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:55.072 11:25:58 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:55.072 11:25:58 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:55.072 11:25:58 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:55.072 11:25:58 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:55.072 11:25:58 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:55.072 11:25:58 -- scripts/common.sh@336 -- # IFS=.-: 00:02:55.072 11:25:58 -- scripts/common.sh@336 -- # read -ra ver1 00:02:55.072 11:25:58 -- scripts/common.sh@337 -- # IFS=.-: 00:02:55.072 11:25:58 -- scripts/common.sh@337 -- # read -ra ver2 00:02:55.072 11:25:58 -- scripts/common.sh@338 -- # local 'op=<' 00:02:55.072 11:25:58 -- scripts/common.sh@340 -- # ver1_l=2 00:02:55.072 11:25:58 -- scripts/common.sh@341 -- # ver2_l=1 00:02:55.072 11:25:58 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:55.072 11:25:58 -- scripts/common.sh@344 -- # case "$op" in 00:02:55.072 11:25:58 -- scripts/common.sh@345 -- # : 1 00:02:55.072 11:25:58 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:55.072 11:25:58 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:55.072 11:25:58 -- scripts/common.sh@365 -- # decimal 1 00:02:55.072 11:25:58 -- scripts/common.sh@353 -- # local d=1 00:02:55.072 11:25:58 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:55.072 11:25:58 -- scripts/common.sh@355 -- # echo 1 00:02:55.072 11:25:58 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:55.072 11:25:58 -- scripts/common.sh@366 -- # decimal 2 00:02:55.072 11:25:58 -- scripts/common.sh@353 -- # local d=2 00:02:55.072 11:25:58 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:55.072 11:25:58 -- scripts/common.sh@355 -- # echo 2 00:02:55.072 11:25:58 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:55.072 11:25:58 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:55.072 11:25:58 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:55.073 11:25:58 -- scripts/common.sh@368 -- # return 0 00:02:55.073 11:25:58 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:55.073 11:25:58 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:55.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.073 --rc genhtml_branch_coverage=1 00:02:55.073 --rc genhtml_function_coverage=1 00:02:55.073 --rc genhtml_legend=1 00:02:55.073 --rc geninfo_all_blocks=1 00:02:55.073 --rc geninfo_unexecuted_blocks=1 00:02:55.073 00:02:55.073 ' 00:02:55.073 11:25:58 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:55.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.073 --rc genhtml_branch_coverage=1 00:02:55.073 --rc genhtml_function_coverage=1 00:02:55.073 --rc genhtml_legend=1 00:02:55.073 --rc geninfo_all_blocks=1 00:02:55.073 --rc geninfo_unexecuted_blocks=1 00:02:55.073 00:02:55.073 ' 00:02:55.073 11:25:58 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:55.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.073 --rc genhtml_branch_coverage=1 00:02:55.073 --rc 
genhtml_function_coverage=1 00:02:55.073 --rc genhtml_legend=1 00:02:55.073 --rc geninfo_all_blocks=1 00:02:55.073 --rc geninfo_unexecuted_blocks=1 00:02:55.073 00:02:55.073 ' 00:02:55.073 11:25:58 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:55.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.073 --rc genhtml_branch_coverage=1 00:02:55.073 --rc genhtml_function_coverage=1 00:02:55.073 --rc genhtml_legend=1 00:02:55.073 --rc geninfo_all_blocks=1 00:02:55.073 --rc geninfo_unexecuted_blocks=1 00:02:55.073 00:02:55.073 ' 00:02:55.073 11:25:58 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:55.073 11:25:58 -- nvmf/common.sh@7 -- # uname -s 00:02:55.073 11:25:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:55.073 11:25:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:55.073 11:25:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:55.073 11:25:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:55.073 11:25:58 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:55.073 11:25:58 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:02:55.073 11:25:58 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:55.073 11:25:58 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:02:55.073 11:25:58 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:02:55.073 11:25:58 -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:02:55.073 11:25:58 -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:55.073 11:25:58 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:02:55.073 11:25:58 -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:02:55.073 11:25:58 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:55.073 11:25:58 -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:55.073 
11:25:58 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:55.073 11:25:58 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:55.073 11:25:58 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:55.073 11:25:58 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:55.073 11:25:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.073 11:25:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.073 11:25:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.073 11:25:58 -- paths/export.sh@5 -- # export PATH 00:02:55.073 11:25:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.073 11:25:58 -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:02:55.073 11:25:58 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:02:55.073 11:25:58 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:02:55.073 11:25:58 -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:02:55.073 
11:25:58 -- nvmf/common.sh@50 -- # : 0 00:02:55.073 11:25:58 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:02:55.073 11:25:58 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:02:55.073 11:25:58 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:02:55.073 11:25:58 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:55.073 11:25:58 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:55.073 11:25:58 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:02:55.073 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:02:55.073 11:25:58 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:02:55.073 11:25:58 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:02:55.073 11:25:58 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:02:55.073 11:25:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:55.073 11:25:58 -- spdk/autotest.sh@32 -- # uname -s 00:02:55.073 11:25:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:55.073 11:25:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:55.073 11:25:58 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:55.073 11:25:58 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:55.073 11:25:58 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:55.073 11:25:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:55.073 11:25:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:55.073 11:25:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:55.073 11:25:58 -- spdk/autotest.sh@48 -- # udevadm_pid=1470230 00:02:55.073 11:25:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:55.073 11:25:58 -- pm/common@17 -- # local monitor 00:02:55.073 11:25:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.073 
11:25:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:55.073 11:25:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.073 11:25:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.073 11:25:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.073 11:25:58 -- pm/common@25 -- # sleep 1 00:02:55.073 11:25:58 -- pm/common@21 -- # date +%s 00:02:55.073 11:25:58 -- pm/common@21 -- # date +%s 00:02:55.073 11:25:58 -- pm/common@21 -- # date +%s 00:02:55.073 11:25:58 -- pm/common@21 -- # date +%s 00:02:55.073 11:25:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732098358 00:02:55.073 11:25:58 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732098358 00:02:55.073 11:25:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732098358 00:02:55.073 11:25:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732098358 00:02:55.332 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732098358_collect-vmstat.pm.log 00:02:55.332 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732098358_collect-cpu-load.pm.log 00:02:55.333 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732098358_collect-bmc-pm.bmc.pm.log 00:02:55.333 Redirecting to 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732098358_collect-cpu-temp.pm.log 00:02:56.269 11:25:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:56.269 11:25:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:56.269 11:25:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:56.270 11:25:59 -- common/autotest_common.sh@10 -- # set +x 00:02:56.270 11:25:59 -- spdk/autotest.sh@59 -- # create_test_list 00:02:56.270 11:25:59 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:56.270 11:25:59 -- common/autotest_common.sh@10 -- # set +x 00:02:56.270 11:25:59 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:56.270 11:25:59 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:56.270 11:25:59 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:56.270 11:25:59 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:56.270 11:25:59 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:56.270 11:25:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:56.270 11:25:59 -- common/autotest_common.sh@1457 -- # uname 00:02:56.270 11:25:59 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:56.270 11:25:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:56.270 11:25:59 -- common/autotest_common.sh@1477 -- # uname 00:02:56.270 11:25:59 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:56.270 11:25:59 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:56.270 11:25:59 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:56.270 lcov: LCOV version 
1.15 00:02:56.270 11:25:59 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:08.481 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:08.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:20.684 11:26:23 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:20.685 11:26:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:20.685 11:26:23 -- common/autotest_common.sh@10 -- # set +x 00:03:20.685 11:26:23 -- spdk/autotest.sh@78 -- # rm -f 00:03:20.685 11:26:23 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:23.217 0000:5f:00.0 (8086 0a54): Already using the nvme driver 00:03:23.217 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:23.217 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:23.476 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:23.476 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:23.476 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:23.476 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:23.476 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:23.476 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:23.476 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:23.476 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:23.476 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:23.476 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:23.476 
0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:23.476 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:23.734 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:23.734 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:23.734 11:26:27 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:23.734 11:26:27 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:23.734 11:26:27 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:23.734 11:26:27 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:23.734 11:26:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:23.734 11:26:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:23.734 11:26:27 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:23.734 11:26:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:23.734 11:26:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:23.734 11:26:27 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:23.734 11:26:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:23.734 11:26:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:23.734 11:26:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:23.735 11:26:27 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:23.735 11:26:27 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:23.735 No valid GPT data, bailing 00:03:23.994 11:26:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:23.994 11:26:27 -- scripts/common.sh@394 -- # pt= 00:03:23.994 11:26:27 -- scripts/common.sh@395 -- # return 1 00:03:23.994 11:26:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:23.994 1+0 records in 00:03:23.994 1+0 records out 00:03:23.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441834 s, 237 MB/s 00:03:23.994 11:26:27 
-- spdk/autotest.sh@105 -- # sync 00:03:23.994 11:26:27 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:23.994 11:26:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:23.994 11:26:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:28.197 11:26:31 -- spdk/autotest.sh@111 -- # uname -s 00:03:28.197 11:26:31 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:28.197 11:26:31 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:28.197 11:26:31 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:30.735 Hugepages 00:03:30.993 node hugesize free / total 00:03:30.993 node0 1048576kB 0 / 0 00:03:30.993 node0 2048kB 0 / 0 00:03:30.993 node1 1048576kB 0 / 0 00:03:30.993 node1 2048kB 0 / 0 00:03:30.993 00:03:30.993 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:30.993 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:30.993 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:30.993 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:30.993 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:30.993 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:30.993 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:30.993 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:30.993 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:30.993 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:30.993 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:30.993 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:30.993 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:30.993 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:30.993 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:30.993 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:30.993 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:30.993 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:30.993 11:26:34 -- spdk/autotest.sh@117 -- # uname -s 00:03:30.993 11:26:34 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 
00:03:30.993 11:26:34 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:30.993 11:26:34 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:33.529 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:33.529 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:33.529 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:33.788 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:33.788 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:33.788 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:33.788 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:33.788 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:33.788 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:33.788 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:33.788 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:33.788 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:33.788 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:34.048 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:34.048 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:34.048 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:37.338 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci
00:03:37.338 11:26:40 -- common/autotest_common.sh@1517 -- # sleep 1
00:03:38.276 11:26:41 -- common/autotest_common.sh@1518 -- # bdfs=()
00:03:38.276 11:26:41 -- common/autotest_common.sh@1518 -- # local bdfs
00:03:38.276 11:26:41 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:03:38.276 11:26:41 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:03:38.276 11:26:41 -- common/autotest_common.sh@1498 -- # bdfs=()
00:03:38.276 11:26:41 -- common/autotest_common.sh@1498 -- # local bdfs
00:03:38.276 11:26:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:38.276 11:26:41 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:38.276 11:26:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:03:38.276 11:26:41 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:03:38.276 11:26:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5f:00.0
00:03:38.276 11:26:41 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:03:41.565 Waiting for block devices as requested
00:03:41.565 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme
00:03:41.565 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:03:41.565 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:03:41.565 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:03:41.565 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:03:41.823 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:03:41.823 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:03:41.823 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:03:42.082 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:03:42.082 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:03:42.082 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:03:42.082 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:03:42.341 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:03:42.341 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:03:42.341 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:03:42.600 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:03:42.600 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:03:42.859 11:26:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:03:42.859 11:26:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5f:00.0
00:03:42.859 11:26:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:03:42.859 11:26:46 -- common/autotest_common.sh@1487 -- # grep 0000:5f:00.0/nvme/nvme
00:03:42.859 11:26:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0
00:03:42.859 11:26:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 ]]
00:03:42.859 11:26:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0
00:03:42.859 11:26:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:03:42.859 11:26:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:03:42.859 11:26:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:03:42.859 11:26:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:03:42.859 11:26:46 -- common/autotest_common.sh@1531 -- # grep oacs
00:03:42.859 11:26:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:03:42.859 11:26:46 -- common/autotest_common.sh@1531 -- # oacs=' 0xe'
00:03:42.859 11:26:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:03:42.859 11:26:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:03:42.859 11:26:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:03:42.859 11:26:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:03:42.859 11:26:46 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:03:42.859 11:26:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:03:42.859 11:26:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:03:42.859 11:26:46 -- common/autotest_common.sh@1543 -- # continue
00:03:42.859 11:26:46 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:03:42.859 11:26:46 -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:42.859 11:26:46 -- common/autotest_common.sh@10 -- # set +x
00:03:42.859 11:26:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:03:42.859 11:26:46 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:42.859 11:26:46 -- common/autotest_common.sh@10 -- # set +x
00:03:42.859 11:26:46 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:46.149 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:46.149 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:46.149 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:46.149 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:46.149 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:46.149 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:46.406 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:46.406 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:46.406 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:46.406 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:46.406 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:46.406 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:46.406 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:46.406 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:46.406 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:46.663 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:49.949 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci
00:03:49.949 11:26:53 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:03:49.949 11:26:53 -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:49.949 11:26:53 -- common/autotest_common.sh@10 -- # set +x
00:03:49.949 11:26:53 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:03:49.949 11:26:53 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:03:49.949 11:26:53 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:03:49.949 11:26:53 -- common/autotest_common.sh@1563 -- # bdfs=()
00:03:49.949 11:26:53 -- common/autotest_common.sh@1563 -- # _bdfs=()
00:03:49.949 11:26:53 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:03:49.949 11:26:53 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:03:49.949 11:26:53 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:03:49.949 11:26:53 -- common/autotest_common.sh@1498 -- # bdfs=()
00:03:49.949 11:26:53 -- common/autotest_common.sh@1498 -- # local bdfs
00:03:49.949 11:26:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:49.949 11:26:53 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:49.949 11:26:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:03:49.949 11:26:53 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:03:49.949 11:26:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5f:00.0
00:03:49.949 11:26:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:03:49.949 11:26:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5f:00.0/device
00:03:49.949 11:26:53 -- common/autotest_common.sh@1566 -- # device=0x0a54
00:03:49.949 11:26:53 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:03:49.949 11:26:53 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:03:49.949 11:26:53 -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:03:49.949 11:26:53 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5f:00.0
00:03:49.949 11:26:53 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5f:00.0 ]]
00:03:49.950 11:26:53 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1482348
00:03:49.950 11:26:53 -- common/autotest_common.sh@1585 -- # waitforlisten 1482348
00:03:49.950 11:26:53 -- common/autotest_common.sh@835 -- # '[' -z 1482348 ']'
00:03:49.950 11:26:53 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:49.950 11:26:53 -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:49.950 11:26:53 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:49.950 11:26:53 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
00:03:49.950 11:26:53 -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:49.950 11:26:53 -- common/autotest_common.sh@10 -- # set +x
00:03:49.950 [2024-11-20 11:26:53.212836] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:03:49.950 [2024-11-20 11:26:53.212892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482348 ]
00:03:49.950 [2024-11-20 11:26:53.290891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:49.950 [2024-11-20 11:26:53.339250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:50.886 11:26:54 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:50.886 11:26:54 -- common/autotest_common.sh@868 -- # return 0
00:03:50.886 11:26:54 -- common/autotest_common.sh@1587 -- # bdf_id=0
00:03:50.886 11:26:54 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:03:50.886 11:26:54 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5f:00.0
00:03:54.177 nvme0n1
00:03:54.177 11:26:57 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:03:54.177 [2024-11-20 11:26:57.245490] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:03:54.177 request:
00:03:54.177 {
00:03:54.177 "nvme_ctrlr_name": "nvme0",
00:03:54.177 "password": "test",
00:03:54.177 "method": "bdev_nvme_opal_revert",
00:03:54.177 "req_id": 1
00:03:54.177 }
00:03:54.177 Got JSON-RPC error response
00:03:54.177 response:
00:03:54.177 {
00:03:54.177 "code": -32602,
00:03:54.177 "message": "Invalid parameters"
00:03:54.177 }
00:03:54.177 11:26:57 -- common/autotest_common.sh@1591 -- # true
00:03:54.177 11:26:57 -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:03:54.177 11:26:57 -- common/autotest_common.sh@1595 -- # killprocess 1482348
00:03:54.177 11:26:57 -- common/autotest_common.sh@954 -- # '[' -z 1482348 ']'
00:03:54.177 11:26:57 -- common/autotest_common.sh@958 -- # kill -0 1482348
00:03:54.177 11:26:57 -- common/autotest_common.sh@959 -- # uname
00:03:54.177 11:26:57 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:54.177 11:26:57 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1482348
00:03:54.177 11:26:57 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:54.177 11:26:57 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:54.177 11:26:57 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1482348'
killing process with pid 1482348
00:03:54.178 11:26:57 -- common/autotest_common.sh@973 -- # kill 1482348
00:03:54.178 11:26:57 -- common/autotest_common.sh@978 -- # wait 1482348
00:03:58.369 11:27:01 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:03:58.369 11:27:01 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:03:58.369 11:27:01 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:58.369 11:27:01 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:58.369 11:27:01 -- spdk/autotest.sh@149 -- # timing_enter lib
00:03:58.369 11:27:01 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:58.369 11:27:01 -- common/autotest_common.sh@10 -- # set +x
00:03:58.369 11:27:01 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:03:58.369 11:27:01 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh
00:03:58.369 11:27:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:58.369 11:27:01 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:58.369 11:27:01 -- common/autotest_common.sh@10 -- # set +x
00:03:58.369 ************************************
00:03:58.369 START TEST env
00:03:58.369 ************************************
00:03:58.369 11:27:01 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh
00:03:58.369 * Looking for test storage...
00:03:58.369 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env
00:03:58.369 11:27:01 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:58.369 11:27:01 env -- common/autotest_common.sh@1693 -- # lcov --version
00:03:58.369 11:27:01 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:58.369 11:27:01 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:58.369 11:27:01 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:58.369 11:27:01 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:58.369 11:27:01 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:58.369 11:27:01 env -- scripts/common.sh@336 -- # IFS=.-:
00:03:58.369 11:27:01 env -- scripts/common.sh@336 -- # read -ra ver1
00:03:58.369 11:27:01 env -- scripts/common.sh@337 -- # IFS=.-:
00:03:58.369 11:27:01 env -- scripts/common.sh@337 -- # read -ra ver2
00:03:58.369 11:27:01 env -- scripts/common.sh@338 -- # local 'op=<'
00:03:58.369 11:27:01 env -- scripts/common.sh@340 -- # ver1_l=2
00:03:58.369 11:27:01 env -- scripts/common.sh@341 -- # ver2_l=1
00:03:58.369 11:27:01 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:58.369 11:27:01 env -- scripts/common.sh@344 -- # case "$op" in
00:03:58.369 11:27:01 env -- scripts/common.sh@345 -- # : 1
00:03:58.369 11:27:01 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:58.369 11:27:01 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:58.369 11:27:01 env -- scripts/common.sh@365 -- # decimal 1
00:03:58.369 11:27:01 env -- scripts/common.sh@353 -- # local d=1
00:03:58.369 11:27:01 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:58.369 11:27:01 env -- scripts/common.sh@355 -- # echo 1
00:03:58.369 11:27:01 env -- scripts/common.sh@365 -- # ver1[v]=1
00:03:58.369 11:27:01 env -- scripts/common.sh@366 -- # decimal 2
00:03:58.369 11:27:01 env -- scripts/common.sh@353 -- # local d=2
00:03:58.369 11:27:01 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:58.369 11:27:01 env -- scripts/common.sh@355 -- # echo 2
00:03:58.369 11:27:01 env -- scripts/common.sh@366 -- # ver2[v]=2
00:03:58.369 11:27:01 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:58.369 11:27:01 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:58.369 11:27:01 env -- scripts/common.sh@368 -- # return 0
00:03:58.369 11:27:01 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:58.369 11:27:01 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:03:58.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:58.369 --rc genhtml_branch_coverage=1
00:03:58.369 --rc genhtml_function_coverage=1
00:03:58.369 --rc genhtml_legend=1
00:03:58.369 --rc geninfo_all_blocks=1
00:03:58.369 --rc geninfo_unexecuted_blocks=1
00:03:58.369
00:03:58.369 '
00:03:58.369 11:27:01 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:03:58.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:58.369 --rc genhtml_branch_coverage=1
00:03:58.369 --rc genhtml_function_coverage=1
00:03:58.369 --rc genhtml_legend=1
00:03:58.369 --rc geninfo_all_blocks=1
00:03:58.370 --rc geninfo_unexecuted_blocks=1
00:03:58.370
00:03:58.370 '
00:03:58.370 11:27:01 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:03:58.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:58.370 --rc genhtml_branch_coverage=1
00:03:58.370 --rc genhtml_function_coverage=1
00:03:58.370 --rc genhtml_legend=1
00:03:58.370 --rc geninfo_all_blocks=1
00:03:58.370 --rc geninfo_unexecuted_blocks=1
00:03:58.370
00:03:58.370 '
00:03:58.370 11:27:01 env -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:03:58.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:58.370 --rc genhtml_branch_coverage=1
00:03:58.370 --rc genhtml_function_coverage=1
00:03:58.370 --rc genhtml_legend=1
00:03:58.370 --rc geninfo_all_blocks=1
00:03:58.370 --rc geninfo_unexecuted_blocks=1
00:03:58.370
00:03:58.370 '
00:03:58.370 11:27:01 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut
00:03:58.370 11:27:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:58.370 11:27:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:58.370 11:27:01 env -- common/autotest_common.sh@10 -- # set +x
00:03:58.370 ************************************
00:03:58.370 START TEST env_memory
00:03:58.370 ************************************
00:03:58.370 11:27:01 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut
00:03:58.370
00:03:58.370
00:03:58.370 CUnit - A unit testing framework for C - Version 2.1-3
00:03:58.370 http://cunit.sourceforge.net/
00:03:58.370
00:03:58.370
00:03:58.370 Suite: memory
00:03:58.370 Test: alloc and free memory map ...[2024-11-20 11:27:01.562885] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:03:58.370 passed
00:03:58.370 Test: mem map translation ...[2024-11-20 11:27:01.581830] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:03:58.370 [2024-11-20 11:27:01.581861] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:03:58.370 [2024-11-20 11:27:01.581899] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:03:58.370 [2024-11-20 11:27:01.581908] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:03:58.370 passed
00:03:58.370 Test: mem map registration ...[2024-11-20 11:27:01.622630] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:03:58.370 [2024-11-20 11:27:01.622647] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:03:58.370 passed
00:03:58.370 Test: mem map adjacent registrations ...passed
00:03:58.370
00:03:58.370 Run Summary: Type Total Ran Passed Failed Inactive
00:03:58.370 suites 1 1 n/a 0 0
00:03:58.370 tests 4 4 4 0 0
00:03:58.370 asserts 152 152 152 0 n/a
00:03:58.370
00:03:58.370 Elapsed time = 0.142 seconds
00:03:58.370
00:03:58.370 real 0m0.157s
00:03:58.370 user 0m0.144s
00:03:58.370 sys 0m0.012s
00:03:58.370 11:27:01 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:58.370 11:27:01 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:03:58.370 ************************************
00:03:58.370 END TEST env_memory
00:03:58.370 ************************************
00:03:58.370 11:27:01 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:58.370 11:27:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:58.370 11:27:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:58.370 11:27:01 env -- common/autotest_common.sh@10 -- # set +x
00:03:58.370 ************************************
00:03:58.370 START TEST env_vtophys
00:03:58.370 ************************************
00:03:58.370 11:27:01 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:58.370 EAL: lib.eal log level changed from notice to debug
00:03:58.370 EAL: Detected lcore 0 as core 0 on socket 0
00:03:58.370 EAL: Detected lcore 1 as core 1 on socket 0
00:03:58.370 EAL: Detected lcore 2 as core 2 on socket 0
00:03:58.370 EAL: Detected lcore 3 as core 3 on socket 0
00:03:58.370 EAL: Detected lcore 4 as core 4 on socket 0
00:03:58.370 EAL: Detected lcore 5 as core 8 on socket 0
00:03:58.370 EAL: Detected lcore 6 as core 9 on socket 0
00:03:58.370 EAL: Detected lcore 7 as core 10 on socket 0
00:03:58.370 EAL: Detected lcore 8 as core 11 on socket 0
00:03:58.370 EAL: Detected lcore 9 as core 16 on socket 0
00:03:58.370 EAL: Detected lcore 10 as core 17 on socket 0
00:03:58.370 EAL: Detected lcore 11 as core 18 on socket 0
00:03:58.370 EAL: Detected lcore 12 as core 19 on socket 0
00:03:58.370 EAL: Detected lcore 13 as core 20 on socket 0
00:03:58.370 EAL: Detected lcore 14 as core 24 on socket 0
00:03:58.370 EAL: Detected lcore 15 as core 25 on socket 0
00:03:58.370 EAL: Detected lcore 16 as core 26 on socket 0
00:03:58.370 EAL: Detected lcore 17 as core 27 on socket 0
00:03:58.370 EAL: Detected lcore 18 as core 0 on socket 1
00:03:58.370 EAL: Detected lcore 19 as core 1 on socket 1
00:03:58.370 EAL: Detected lcore 20 as core 2 on socket 1
00:03:58.370 EAL: Detected lcore 21 as core 3 on socket 1
00:03:58.370 EAL: Detected lcore 22 as core 4 on socket 1
00:03:58.370 EAL: Detected lcore 23 as core 8 on socket 1
00:03:58.370 EAL: Detected lcore 24 as core 9 on socket 1
00:03:58.370 EAL: Detected lcore 25 as core 10 on socket 1
00:03:58.370 EAL: Detected lcore 26 as core 11 on socket 1
00:03:58.370 EAL: Detected lcore 27 as core 16 on socket 1
00:03:58.370 EAL: Detected lcore 28 as core 17 on socket 1
00:03:58.370 EAL: Detected lcore 29 as core 18 on socket 1
00:03:58.370 EAL: Detected lcore 30 as core 19 on socket 1
00:03:58.370 EAL: Detected lcore 31 as core 20 on socket 1
00:03:58.370 EAL: Detected lcore 32 as core 24 on socket 1
00:03:58.370 EAL: Detected lcore 33 as core 25 on socket 1
00:03:58.370 EAL: Detected lcore 34 as core 26 on socket 1
00:03:58.370 EAL: Detected lcore 35 as core 27 on socket 1
00:03:58.370 EAL: Detected lcore 36 as core 0 on socket 0
00:03:58.370 EAL: Detected lcore 37 as core 1 on socket 0
00:03:58.370 EAL: Detected lcore 38 as core 2 on socket 0
00:03:58.370 EAL: Detected lcore 39 as core 3 on socket 0
00:03:58.370 EAL: Detected lcore 40 as core 4 on socket 0
00:03:58.370 EAL: Detected lcore 41 as core 8 on socket 0
00:03:58.370 EAL: Detected lcore 42 as core 9 on socket 0
00:03:58.370 EAL: Detected lcore 43 as core 10 on socket 0
00:03:58.370 EAL: Detected lcore 44 as core 11 on socket 0
00:03:58.370 EAL: Detected lcore 45 as core 16 on socket 0
00:03:58.370 EAL: Detected lcore 46 as core 17 on socket 0
00:03:58.370 EAL: Detected lcore 47 as core 18 on socket 0
00:03:58.370 EAL: Detected lcore 48 as core 19 on socket 0
00:03:58.370 EAL: Detected lcore 49 as core 20 on socket 0
00:03:58.370 EAL: Detected lcore 50 as core 24 on socket 0
00:03:58.370 EAL: Detected lcore 51 as core 25 on socket 0
00:03:58.370 EAL: Detected lcore 52 as core 26 on socket 0
00:03:58.370 EAL: Detected lcore 53 as core 27 on socket 0
00:03:58.370 EAL: Detected lcore 54 as core 0 on socket 1
00:03:58.370 EAL: Detected lcore 55 as core 1 on socket 1
00:03:58.370 EAL: Detected lcore 56 as core 2 on socket 1
00:03:58.370 EAL: Detected lcore 57 as core 3 on socket 1
00:03:58.370 EAL: Detected lcore 58 as core 4 on socket 1
00:03:58.370 EAL: Detected lcore 59 as core 8 on socket 1
00:03:58.370 EAL: Detected lcore 60 as core 9 on socket 1
00:03:58.370 EAL: Detected lcore 61 as core 10 on socket 1
00:03:58.370 EAL: Detected lcore 62 as core 11 on socket 1
00:03:58.370 EAL: Detected lcore 63 as core 16 on socket 1
00:03:58.370 EAL: Detected lcore 64 as core 17 on socket 1
00:03:58.370 EAL: Detected lcore 65 as core 18 on socket 1
00:03:58.370 EAL: Detected lcore 66 as core 19 on socket 1
00:03:58.370 EAL: Detected lcore 67 as core 20 on socket 1
00:03:58.370 EAL: Detected lcore 68 as core 24 on socket 1
00:03:58.370 EAL: Detected lcore 69 as core 25 on socket 1
00:03:58.370 EAL: Detected lcore 70 as core 26 on socket 1
00:03:58.370 EAL: Detected lcore 71 as core 27 on socket 1
00:03:58.370 EAL: Maximum logical cores by configuration: 128
00:03:58.370 EAL: Detected CPU lcores: 72
00:03:58.370 EAL: Detected NUMA nodes: 2
00:03:58.370 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:03:58.370 EAL: Detected shared linkage of DPDK
00:03:58.370 EAL: No shared files mode enabled, IPC will be disabled
00:03:58.370 EAL: Bus pci wants IOVA as 'DC'
00:03:58.370 EAL: Buses did not request a specific IOVA mode.
00:03:58.370 EAL: IOMMU is available, selecting IOVA as VA mode.
00:03:58.370 EAL: Selected IOVA mode 'VA'
00:03:58.370 EAL: Probing VFIO support...
00:03:58.370 EAL: IOMMU type 1 (Type 1) is supported
00:03:58.370 EAL: IOMMU type 7 (sPAPR) is not supported
00:03:58.370 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:03:58.370 EAL: VFIO support initialized
00:03:58.370 EAL: Ask a virtual area of 0x2e000 bytes
00:03:58.370 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:03:58.370 EAL: Setting up physically contiguous memory...
00:03:58.371 EAL: Setting maximum number of open files to 524288
00:03:58.371 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:58.371 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:03:58.371 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:58.371 EAL: Ask a virtual area of 0x61000 bytes
00:03:58.371 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:58.371 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:58.371 EAL: Ask a virtual area of 0x400000000 bytes
00:03:58.371 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:58.371 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:58.371 EAL: Ask a virtual area of 0x61000 bytes
00:03:58.371 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:58.371 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:58.371 EAL: Ask a virtual area of 0x400000000 bytes
00:03:58.371 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:58.371 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:58.371 EAL: Ask a virtual area of 0x61000 bytes
00:03:58.371 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:58.371 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:58.371 EAL: Ask a virtual area of 0x400000000 bytes
00:03:58.371 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:58.371 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:58.371 EAL: Ask a virtual area of 0x61000 bytes
00:03:58.371 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:58.371 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:58.371 EAL: Ask a virtual area of 0x400000000 bytes
00:03:58.371 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:58.371 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:58.371 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:58.371 EAL: Ask a virtual area of 0x61000 bytes
00:03:58.371 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:03:58.371 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:58.371 EAL: Ask a virtual area of 0x400000000 bytes
00:03:58.371 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:03:58.371 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:03:58.371 EAL: Ask a virtual area of 0x61000 bytes
00:03:58.371 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:03:58.371 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:58.371 EAL: Ask a virtual area of 0x400000000 bytes
00:03:58.371 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:03:58.371 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:03:58.371 EAL: Ask a virtual area of 0x61000 bytes
00:03:58.371 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:03:58.371 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:58.371 EAL: Ask a virtual area of 0x400000000 bytes
00:03:58.371 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:03:58.371 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:03:58.371 EAL: Ask a virtual area of 0x61000 bytes
00:03:58.371 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:03:58.371 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:58.371 EAL: Ask a virtual area of 0x400000000 bytes
00:03:58.371 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:03:58.371 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:03:58.371 EAL: Hugepages will be freed exactly as allocated.
00:03:58.371 EAL: No shared files mode enabled, IPC is disabled
00:03:58.371 EAL: No shared files mode enabled, IPC is disabled
00:03:58.371 EAL: TSC frequency is ~2300000 KHz
00:03:58.371 EAL: Main lcore 0 is ready (tid=7f634a3f2a00;cpuset=[0])
00:03:58.371 EAL: Trying to obtain current memory policy.
00:03:58.371 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.371 EAL: Restoring previous memory policy: 0
00:03:58.371 EAL: request: mp_malloc_sync
00:03:58.371 EAL: No shared files mode enabled, IPC is disabled
00:03:58.371 EAL: Heap on socket 0 was expanded by 2MB
00:03:58.371 EAL: No shared files mode enabled, IPC is disabled
00:03:58.371 EAL: No PCI address specified using 'addr=' in: bus=pci
00:03:58.371 EAL: Mem event callback 'spdk:(nil)' registered
00:03:58.630
00:03:58.630
00:03:58.630 CUnit - A unit testing framework for C - Version 2.1-3
00:03:58.630 http://cunit.sourceforge.net/
00:03:58.630
00:03:58.630
00:03:58.630 Suite: components_suite
00:03:58.630 Test: vtophys_malloc_test ...passed
00:03:58.630 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:58.630 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.630 EAL: Restoring previous memory policy: 4
00:03:58.630 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.630 EAL: request: mp_malloc_sync
00:03:58.630 EAL: No shared files mode enabled, IPC is disabled
00:03:58.630 EAL: Heap on socket 0 was expanded by 4MB
00:03:58.630 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.630 EAL: request: mp_malloc_sync
00:03:58.630 EAL: No shared files mode enabled, IPC is disabled
00:03:58.630 EAL: Heap on socket 0 was shrunk by 4MB
00:03:58.630 EAL: Trying to obtain current memory policy.
00:03:58.630 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.630 EAL: Restoring previous memory policy: 4
00:03:58.631 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.631 EAL: request: mp_malloc_sync
00:03:58.631 EAL: No shared files mode enabled, IPC is disabled
00:03:58.631 EAL: Heap on socket 0 was expanded by 6MB
00:03:58.631 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.631 EAL: request: mp_malloc_sync
00:03:58.631 EAL: No shared files mode enabled, IPC is disabled
00:03:58.631 EAL: Heap on socket 0 was shrunk by 6MB
00:03:58.631 EAL: Trying to obtain current memory policy.
00:03:58.631 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.631 EAL: Restoring previous memory policy: 4
00:03:58.631 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.631 EAL: request: mp_malloc_sync
00:03:58.631 EAL: No shared files mode enabled, IPC is disabled
00:03:58.631 EAL: Heap on socket 0 was expanded by 10MB
00:03:58.631 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.631 EAL: request: mp_malloc_sync
00:03:58.631 EAL: No shared files mode enabled, IPC is disabled
00:03:58.631 EAL: Heap on socket 0 was shrunk by 10MB
00:03:58.631 EAL: Trying to obtain current memory policy.
00:03:58.631 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.631 EAL: Restoring previous memory policy: 4
00:03:58.631 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.631 EAL: request: mp_malloc_sync
00:03:58.631 EAL: No shared files mode enabled, IPC is disabled
00:03:58.631 EAL: Heap on socket 0 was expanded by 18MB
00:03:58.631 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.631 EAL: request: mp_malloc_sync
00:03:58.631 EAL: No shared files mode enabled, IPC is disabled
00:03:58.631 EAL: Heap on socket 0 was shrunk by 18MB
00:03:58.631 EAL: Trying to obtain current memory policy.
00:03:58.631 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.631 EAL: Restoring previous memory policy: 4
00:03:58.631 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.631 EAL: request: mp_malloc_sync
00:03:58.631 EAL: No shared files mode enabled, IPC is disabled
00:03:58.631 EAL: Heap on socket 0 was expanded by 34MB
00:03:58.631 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.631 EAL: request: mp_malloc_sync
00:03:58.631 EAL: No shared files mode enabled, IPC is disabled
00:03:58.631 EAL: Heap on socket 0 was shrunk by 34MB
00:03:58.631 EAL: Trying to obtain current memory policy.
00:03:58.631 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.631 EAL: Restoring previous memory policy: 4
00:03:58.631 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.631 EAL: request: mp_malloc_sync
00:03:58.631 EAL: No shared files mode enabled, IPC is disabled
00:03:58.631 EAL: Heap on socket 0 was expanded by 66MB
00:03:58.631 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.631 EAL: request: mp_malloc_sync
00:03:58.631 EAL: No shared files mode enabled, IPC is disabled
00:03:58.631 EAL: Heap on socket 0 was shrunk by 66MB
00:03:58.631 EAL: Trying to obtain current memory policy.
00:03:58.631 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:58.631 EAL: Restoring previous memory policy: 4
00:03:58.631 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.631 EAL: request: mp_malloc_sync
00:03:58.631 EAL: No shared files mode enabled, IPC is disabled
00:03:58.631 EAL: Heap on socket 0 was expanded by 130MB
00:03:58.631 EAL: Calling mem event callback 'spdk:(nil)'
00:03:58.631 EAL: request: mp_malloc_sync
00:03:58.631 EAL: No shared files mode enabled, IPC is disabled
00:03:58.631 EAL: Heap on socket 0 was shrunk by 130MB
00:03:58.631 EAL: Trying to obtain current memory policy.
00:03:58.631 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.631 EAL: Restoring previous memory policy: 4 00:03:58.631 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.631 EAL: request: mp_malloc_sync 00:03:58.631 EAL: No shared files mode enabled, IPC is disabled 00:03:58.631 EAL: Heap on socket 0 was expanded by 258MB 00:03:58.631 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.890 EAL: request: mp_malloc_sync 00:03:58.890 EAL: No shared files mode enabled, IPC is disabled 00:03:58.890 EAL: Heap on socket 0 was shrunk by 258MB 00:03:58.890 EAL: Trying to obtain current memory policy. 00:03:58.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.890 EAL: Restoring previous memory policy: 4 00:03:58.890 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.890 EAL: request: mp_malloc_sync 00:03:58.890 EAL: No shared files mode enabled, IPC is disabled 00:03:58.890 EAL: Heap on socket 0 was expanded by 514MB 00:03:58.890 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.149 EAL: request: mp_malloc_sync 00:03:59.149 EAL: No shared files mode enabled, IPC is disabled 00:03:59.149 EAL: Heap on socket 0 was shrunk by 514MB 00:03:59.149 EAL: Trying to obtain current memory policy. 
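The heap expand/shrink pairs above come from the env_vtophys suite allocating progressively larger buffers, and the step sizes fit a 2^k + 2 MB pattern (6, 10, 18, 34, ... MB). A small sketch reproducing that schedule (a hypothetical reconstruction from the log, not the test's actual code):

```shell
# Hypothetical reconstruction of the allocation schedule: each heap
# expansion above is 2^k + 2 MB for increasing k (6, 10, 18, ... MB).
sizes=""
for k in 2 3 4 5 6 7 8 9 10; do
  sizes="$sizes $(( (1 << k) + 2 ))"
done
echo "heap steps (MB):$sizes"
```

Each step triggers a matching `mem event callback` pair in the log: one for the expansion, one for the shrink when the buffer is freed.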
00:03:59.149 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.409 EAL: Restoring previous memory policy: 4 00:03:59.409 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.409 EAL: request: mp_malloc_sync 00:03:59.409 EAL: No shared files mode enabled, IPC is disabled 00:03:59.409 EAL: Heap on socket 0 was expanded by 1026MB 00:03:59.409 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.669 EAL: request: mp_malloc_sync 00:03:59.669 EAL: No shared files mode enabled, IPC is disabled 00:03:59.669 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:59.669 passed 00:03:59.669 00:03:59.669 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.669 suites 1 1 n/a 0 0 00:03:59.669 tests 2 2 2 0 0 00:03:59.669 asserts 497 497 497 0 n/a 00:03:59.669 00:03:59.669 Elapsed time = 1.132 seconds 00:03:59.669 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.669 EAL: request: mp_malloc_sync 00:03:59.669 EAL: No shared files mode enabled, IPC is disabled 00:03:59.669 EAL: Heap on socket 0 was shrunk by 2MB 00:03:59.669 EAL: No shared files mode enabled, IPC is disabled 00:03:59.669 EAL: No shared files mode enabled, IPC is disabled 00:03:59.669 EAL: No shared files mode enabled, IPC is disabled 00:03:59.669 00:03:59.669 real 0m1.272s 00:03:59.669 user 0m0.735s 00:03:59.669 sys 0m0.508s 00:03:59.669 11:27:03 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.669 11:27:03 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:59.669 ************************************ 00:03:59.669 END TEST env_vtophys 00:03:59.669 ************************************ 00:03:59.669 11:27:03 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:03:59.669 11:27:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.669 11:27:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.669 11:27:03 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.669 
************************************ 00:03:59.669 START TEST env_pci 00:03:59.669 ************************************ 00:03:59.669 11:27:03 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:03:59.669 00:03:59.669 00:03:59.669 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.669 http://cunit.sourceforge.net/ 00:03:59.669 00:03:59.669 00:03:59.669 Suite: pci 00:03:59.669 Test: pci_hook ...[2024-11-20 11:27:03.114756] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1483848 has claimed it 00:03:59.929 EAL: Cannot find device (10000:00:01.0) 00:03:59.929 EAL: Failed to attach device on primary process 00:03:59.929 passed 00:03:59.929 00:03:59.929 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.929 suites 1 1 n/a 0 0 00:03:59.929 tests 1 1 1 0 0 00:03:59.929 asserts 25 25 25 0 n/a 00:03:59.929 00:03:59.929 Elapsed time = 0.031 seconds 00:03:59.929 00:03:59.929 real 0m0.053s 00:03:59.929 user 0m0.016s 00:03:59.929 sys 0m0.037s 00:03:59.929 11:27:03 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.929 11:27:03 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:59.929 ************************************ 00:03:59.929 END TEST env_pci 00:03:59.929 ************************************ 00:03:59.929 11:27:03 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:59.929 11:27:03 env -- env/env.sh@15 -- # uname 00:03:59.929 11:27:03 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:59.929 11:27:03 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:59.929 11:27:03 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:59.929 11:27:03 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:59.929 11:27:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.929 11:27:03 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.929 ************************************ 00:03:59.929 START TEST env_dpdk_post_init 00:03:59.929 ************************************ 00:03:59.929 11:27:03 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:59.929 EAL: Detected CPU lcores: 72 00:03:59.929 EAL: Detected NUMA nodes: 2 00:03:59.929 EAL: Detected shared linkage of DPDK 00:03:59.929 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:59.929 EAL: Selected IOVA mode 'VA' 00:03:59.929 EAL: VFIO support initialized 00:03:59.929 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:59.929 EAL: Using IOMMU type 1 (Type 1) 00:03:59.929 EAL: Ignore mapping IO port bar(1) 00:03:59.929 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:59.929 EAL: Ignore mapping IO port bar(1) 00:03:59.929 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:00.189 EAL: Ignore mapping IO port bar(1) 00:04:00.189 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:00.189 EAL: Ignore mapping IO port bar(1) 00:04:00.189 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:00.189 EAL: Ignore mapping IO port bar(1) 00:04:00.189 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:00.189 EAL: Ignore mapping IO port bar(1) 00:04:00.189 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:00.189 EAL: Ignore mapping IO port bar(1) 00:04:00.189 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:00.189 EAL: Ignore mapping IO port bar(1) 00:04:00.189 EAL: Probe 
PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:00.757 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5f:00.0 (socket 0) 00:04:00.757 EAL: Ignore mapping IO port bar(1) 00:04:00.757 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:00.757 EAL: Ignore mapping IO port bar(1) 00:04:00.757 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:01.016 EAL: Ignore mapping IO port bar(1) 00:04:01.016 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:01.016 EAL: Ignore mapping IO port bar(1) 00:04:01.016 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:01.016 EAL: Ignore mapping IO port bar(1) 00:04:01.016 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:01.016 EAL: Ignore mapping IO port bar(1) 00:04:01.016 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:01.016 EAL: Ignore mapping IO port bar(1) 00:04:01.016 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:01.016 EAL: Ignore mapping IO port bar(1) 00:04:01.016 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:06.375 EAL: Releasing PCI mapped resource for 0000:5f:00.0 00:04:06.375 EAL: Calling pci_unmap_resource for 0000:5f:00.0 at 0x202001020000 00:04:06.644 Starting DPDK initialization... 00:04:06.645 Starting SPDK post initialization... 00:04:06.645 SPDK NVMe probe 00:04:06.645 Attaching to 0000:5f:00.0 00:04:06.645 Attached to 0000:5f:00.0 00:04:06.645 Cleaning up... 
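Each probe line above identifies a device by its PCI BDF address (e.g. 0000:5f:00.0 for the NVMe drive that gets attached). Splitting such an address into its domain/bus/device/function fields is a one-liner; a minimal sketch:

```shell
# Split a PCI BDF address like the ones probed above into its four fields.
bdf="0000:5f:00.0"
IFS=':.' read -r dom bus dev fn <<EOF
$bdf
EOF
echo "domain=$dom bus=$bus device=$dev function=$fn"
```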
00:04:06.645 00:04:06.645 real 0m6.656s 00:04:06.645 user 0m4.825s 00:04:06.645 sys 0m0.890s 00:04:06.645 11:27:09 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.645 11:27:09 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:06.645 ************************************ 00:04:06.645 END TEST env_dpdk_post_init 00:04:06.645 ************************************ 00:04:06.645 11:27:09 env -- env/env.sh@26 -- # uname 00:04:06.645 11:27:09 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:06.645 11:27:09 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.645 11:27:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.645 11:27:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.645 11:27:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.645 ************************************ 00:04:06.645 START TEST env_mem_callbacks 00:04:06.645 ************************************ 00:04:06.645 11:27:09 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.645 EAL: Detected CPU lcores: 72 00:04:06.645 EAL: Detected NUMA nodes: 2 00:04:06.645 EAL: Detected shared linkage of DPDK 00:04:06.645 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:06.645 EAL: Selected IOVA mode 'VA' 00:04:06.645 EAL: VFIO support initialized 00:04:06.645 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:06.645 00:04:06.645 00:04:06.645 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.645 http://cunit.sourceforge.net/ 00:04:06.645 00:04:06.645 00:04:06.645 Suite: memory 00:04:06.645 Test: test ... 
00:04:06.645 register 0x200000200000 2097152 00:04:06.645 malloc 3145728 00:04:06.645 register 0x200000400000 4194304 00:04:06.645 buf 0x200000500000 len 3145728 PASSED 00:04:06.645 malloc 64 00:04:06.645 buf 0x2000004fff40 len 64 PASSED 00:04:06.645 malloc 4194304 00:04:06.645 register 0x200000800000 6291456 00:04:06.645 buf 0x200000a00000 len 4194304 PASSED 00:04:06.645 free 0x200000500000 3145728 00:04:06.645 free 0x2000004fff40 64 00:04:06.645 unregister 0x200000400000 4194304 PASSED 00:04:06.645 free 0x200000a00000 4194304 00:04:06.645 unregister 0x200000800000 6291456 PASSED 00:04:06.645 malloc 8388608 00:04:06.645 register 0x200000400000 10485760 00:04:06.645 buf 0x200000600000 len 8388608 PASSED 00:04:06.645 free 0x200000600000 8388608 00:04:06.645 unregister 0x200000400000 10485760 PASSED 00:04:06.645 passed 00:04:06.645 00:04:06.645 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.645 suites 1 1 n/a 0 0 00:04:06.645 tests 1 1 1 0 0 00:04:06.645 asserts 15 15 15 0 n/a 00:04:06.645 00:04:06.645 Elapsed time = 0.005 seconds 00:04:06.645 00:04:06.645 real 0m0.050s 00:04:06.645 user 0m0.017s 00:04:06.645 sys 0m0.033s 00:04:06.645 11:27:10 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.645 11:27:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:06.645 ************************************ 00:04:06.645 END TEST env_mem_callbacks 00:04:06.645 ************************************ 00:04:06.645 00:04:06.645 real 0m8.778s 00:04:06.645 user 0m5.973s 00:04:06.645 sys 0m1.878s 00:04:06.645 11:27:10 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.645 11:27:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.645 ************************************ 00:04:06.645 END TEST env 00:04:06.645 ************************************ 00:04:06.645 11:27:10 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:06.645 11:27:10 -- 
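The register/unregister trace above pairs each registration with a matching unregistration. A quick sanity check on such a trace is to sum the signed sizes and confirm they cancel; a sketch over a trimmed, hand-balanced copy of the events (illustrative data, not parsed from the live log):

```shell
# Sum register sizes as positive and unregister sizes as negative;
# a balanced trace nets to zero bytes.
trace='register 0x200000200000 2097152
register 0x200000400000 4194304
unregister 0x200000400000 4194304
register 0x200000800000 6291456
unregister 0x200000800000 6291456
unregister 0x200000200000 2097152'
net=$(echo "$trace" | awk '{d += ($1 == "register" ? $3 : -$3)} END {print d+0}')
echo "net registered bytes: $net"
```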
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.645 11:27:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.645 11:27:10 -- common/autotest_common.sh@10 -- # set +x 00:04:06.903 ************************************ 00:04:06.903 START TEST rpc 00:04:06.903 ************************************ 00:04:06.903 11:27:10 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:06.903 * Looking for test storage... 00:04:06.903 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:06.903 11:27:10 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:06.903 11:27:10 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:06.903 11:27:10 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:06.903 11:27:10 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:06.903 11:27:10 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.903 11:27:10 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.903 11:27:10 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.903 11:27:10 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.903 11:27:10 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.903 11:27:10 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.903 11:27:10 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.903 11:27:10 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.903 11:27:10 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.903 11:27:10 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.903 11:27:10 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.903 11:27:10 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:06.903 11:27:10 rpc -- scripts/common.sh@345 -- # : 1 00:04:06.903 11:27:10 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.903 11:27:10 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.903 11:27:10 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:06.903 11:27:10 rpc -- scripts/common.sh@353 -- # local d=1 00:04:06.903 11:27:10 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.903 11:27:10 rpc -- scripts/common.sh@355 -- # echo 1 00:04:06.903 11:27:10 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.903 11:27:10 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:06.903 11:27:10 rpc -- scripts/common.sh@353 -- # local d=2 00:04:06.903 11:27:10 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.903 11:27:10 rpc -- scripts/common.sh@355 -- # echo 2 00:04:06.903 11:27:10 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.903 11:27:10 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.903 11:27:10 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.903 11:27:10 rpc -- scripts/common.sh@368 -- # return 0 00:04:06.903 11:27:10 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.903 11:27:10 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:06.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.903 --rc genhtml_branch_coverage=1 00:04:06.903 --rc genhtml_function_coverage=1 00:04:06.903 --rc genhtml_legend=1 00:04:06.903 --rc geninfo_all_blocks=1 00:04:06.903 --rc geninfo_unexecuted_blocks=1 00:04:06.903 00:04:06.903 ' 00:04:06.903 11:27:10 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:06.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.903 --rc genhtml_branch_coverage=1 00:04:06.903 --rc genhtml_function_coverage=1 00:04:06.903 --rc genhtml_legend=1 00:04:06.903 --rc geninfo_all_blocks=1 00:04:06.903 --rc geninfo_unexecuted_blocks=1 00:04:06.903 00:04:06.903 ' 00:04:06.903 11:27:10 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:06.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:06.903 --rc genhtml_branch_coverage=1 00:04:06.903 --rc genhtml_function_coverage=1 00:04:06.903 --rc genhtml_legend=1 00:04:06.903 --rc geninfo_all_blocks=1 00:04:06.903 --rc geninfo_unexecuted_blocks=1 00:04:06.903 00:04:06.903 ' 00:04:06.903 11:27:10 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:06.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.903 --rc genhtml_branch_coverage=1 00:04:06.903 --rc genhtml_function_coverage=1 00:04:06.903 --rc genhtml_legend=1 00:04:06.903 --rc geninfo_all_blocks=1 00:04:06.903 --rc geninfo_unexecuted_blocks=1 00:04:06.903 00:04:06.903 ' 00:04:06.903 11:27:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1485274 00:04:06.903 11:27:10 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:06.903 11:27:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.903 11:27:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1485274 00:04:06.903 11:27:10 rpc -- common/autotest_common.sh@835 -- # '[' -z 1485274 ']' 00:04:06.903 11:27:10 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.903 11:27:10 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.903 11:27:10 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.903 11:27:10 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.903 11:27:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.903 [2024-11-20 11:27:10.380584] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
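The cmp_versions xtrace above splits each version string on dots and compares the fields numerically, so that `lt 1.15 2` holds. A minimal equivalent built on `sort -V` (a sketch, not the scripts/common.sh implementation):

```shell
# True when $1 sorts strictly before $2 under version ordering.
version_lt() {
  [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
version_lt 1.15 2 && echo "1.15 < 2"
```

Note the version-ordering subtlety the field-by-field comparison exists for: 1.2 precedes 1.15, because 2 < 15 numerically even though "15" < "2" lexically.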
00:04:06.904 [2024-11-20 11:27:10.380650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485274 ] 00:04:07.162 [2024-11-20 11:27:10.464728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.162 [2024-11-20 11:27:10.508598] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:07.162 [2024-11-20 11:27:10.508643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1485274' to capture a snapshot of events at runtime. 00:04:07.162 [2024-11-20 11:27:10.508652] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:07.162 [2024-11-20 11:27:10.508676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:07.162 [2024-11-20 11:27:10.508684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1485274 for offline analysis/debug. 
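The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from a waitforlisten-style retry loop (note `max_retries=100` in the xtrace). A minimal sketch of that polling pattern, with a plain file standing in for the socket and a hypothetical helper name:

```shell
# Poll for a path to appear, giving up after a bounded number of retries.
wait_for_path() {
  path=$1; retries=${2:-100}
  while [ "$retries" -gt 0 ]; do
    [ -e "$path" ] && return 0
    retries=$((retries - 1))
    sleep 0.01
  done
  return 1
}
tmp=$(mktemp)
wait_for_path "$tmp" && echo "ready: $tmp"
rm -f "$tmp"
```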
00:04:07.162 [2024-11-20 11:27:10.509144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.420 11:27:10 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.421 11:27:10 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:07.421 11:27:10 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:07.421 11:27:10 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:07.421 11:27:10 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:07.421 11:27:10 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:07.421 11:27:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.421 11:27:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.421 11:27:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.421 ************************************ 00:04:07.421 START TEST rpc_integrity 00:04:07.421 ************************************ 00:04:07.421 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:07.421 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:07.421 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.421 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.421 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.421 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 
00:04:07.421 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:07.421 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:07.421 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:07.421 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.421 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.421 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.421 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:07.421 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:07.421 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.421 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.421 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.421 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:07.421 { 00:04:07.421 "name": "Malloc0", 00:04:07.421 "aliases": [ 00:04:07.421 "f074191a-9d1a-4f9f-b1d1-2a1f50d89856" 00:04:07.421 ], 00:04:07.421 "product_name": "Malloc disk", 00:04:07.421 "block_size": 512, 00:04:07.421 "num_blocks": 16384, 00:04:07.421 "uuid": "f074191a-9d1a-4f9f-b1d1-2a1f50d89856", 00:04:07.421 "assigned_rate_limits": { 00:04:07.421 "rw_ios_per_sec": 0, 00:04:07.421 "rw_mbytes_per_sec": 0, 00:04:07.421 "r_mbytes_per_sec": 0, 00:04:07.421 "w_mbytes_per_sec": 0 00:04:07.421 }, 00:04:07.421 "claimed": false, 00:04:07.421 "zoned": false, 00:04:07.421 "supported_io_types": { 00:04:07.421 "read": true, 00:04:07.421 "write": true, 00:04:07.421 "unmap": true, 00:04:07.421 "flush": true, 00:04:07.421 "reset": true, 00:04:07.421 "nvme_admin": false, 00:04:07.421 "nvme_io": false, 00:04:07.421 "nvme_io_md": false, 00:04:07.421 "write_zeroes": true, 00:04:07.421 "zcopy": true, 00:04:07.421 "get_zone_info": false, 00:04:07.421 "zone_management": false, 
00:04:07.421 "zone_append": false, 00:04:07.421 "compare": false, 00:04:07.421 "compare_and_write": false, 00:04:07.421 "abort": true, 00:04:07.421 "seek_hole": false, 00:04:07.421 "seek_data": false, 00:04:07.421 "copy": true, 00:04:07.421 "nvme_iov_md": false 00:04:07.421 }, 00:04:07.421 "memory_domains": [ 00:04:07.421 { 00:04:07.421 "dma_device_id": "system", 00:04:07.421 "dma_device_type": 1 00:04:07.421 }, 00:04:07.421 { 00:04:07.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.421 "dma_device_type": 2 00:04:07.421 } 00:04:07.421 ], 00:04:07.421 "driver_specific": {} 00:04:07.421 } 00:04:07.421 ]' 00:04:07.421 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:07.680 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:07.680 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:07.680 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.680 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.680 [2024-11-20 11:27:10.914431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:07.680 [2024-11-20 11:27:10.914464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:07.680 [2024-11-20 11:27:10.914478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1437a20 00:04:07.680 [2024-11-20 11:27:10.914487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:07.680 [2024-11-20 11:27:10.915660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:07.680 [2024-11-20 11:27:10.915682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:07.680 Passthru0 00:04:07.680 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.680 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:07.680 
11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.680 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.680 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.680 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:07.680 { 00:04:07.680 "name": "Malloc0", 00:04:07.680 "aliases": [ 00:04:07.680 "f074191a-9d1a-4f9f-b1d1-2a1f50d89856" 00:04:07.680 ], 00:04:07.680 "product_name": "Malloc disk", 00:04:07.680 "block_size": 512, 00:04:07.680 "num_blocks": 16384, 00:04:07.680 "uuid": "f074191a-9d1a-4f9f-b1d1-2a1f50d89856", 00:04:07.680 "assigned_rate_limits": { 00:04:07.680 "rw_ios_per_sec": 0, 00:04:07.680 "rw_mbytes_per_sec": 0, 00:04:07.680 "r_mbytes_per_sec": 0, 00:04:07.680 "w_mbytes_per_sec": 0 00:04:07.680 }, 00:04:07.680 "claimed": true, 00:04:07.680 "claim_type": "exclusive_write", 00:04:07.680 "zoned": false, 00:04:07.680 "supported_io_types": { 00:04:07.680 "read": true, 00:04:07.680 "write": true, 00:04:07.680 "unmap": true, 00:04:07.680 "flush": true, 00:04:07.680 "reset": true, 00:04:07.680 "nvme_admin": false, 00:04:07.680 "nvme_io": false, 00:04:07.680 "nvme_io_md": false, 00:04:07.680 "write_zeroes": true, 00:04:07.680 "zcopy": true, 00:04:07.680 "get_zone_info": false, 00:04:07.680 "zone_management": false, 00:04:07.680 "zone_append": false, 00:04:07.680 "compare": false, 00:04:07.680 "compare_and_write": false, 00:04:07.680 "abort": true, 00:04:07.680 "seek_hole": false, 00:04:07.680 "seek_data": false, 00:04:07.680 "copy": true, 00:04:07.680 "nvme_iov_md": false 00:04:07.680 }, 00:04:07.680 "memory_domains": [ 00:04:07.680 { 00:04:07.680 "dma_device_id": "system", 00:04:07.680 "dma_device_type": 1 00:04:07.680 }, 00:04:07.680 { 00:04:07.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.680 "dma_device_type": 2 00:04:07.680 } 00:04:07.680 ], 00:04:07.680 "driver_specific": {} 00:04:07.680 }, 00:04:07.680 { 00:04:07.680 "name": 
"Passthru0", 00:04:07.680 "aliases": [ 00:04:07.680 "768c35cd-24e4-59b8-b08c-157eb437bebb" 00:04:07.680 ], 00:04:07.680 "product_name": "passthru", 00:04:07.680 "block_size": 512, 00:04:07.680 "num_blocks": 16384, 00:04:07.680 "uuid": "768c35cd-24e4-59b8-b08c-157eb437bebb", 00:04:07.680 "assigned_rate_limits": { 00:04:07.680 "rw_ios_per_sec": 0, 00:04:07.680 "rw_mbytes_per_sec": 0, 00:04:07.680 "r_mbytes_per_sec": 0, 00:04:07.680 "w_mbytes_per_sec": 0 00:04:07.680 }, 00:04:07.680 "claimed": false, 00:04:07.680 "zoned": false, 00:04:07.680 "supported_io_types": { 00:04:07.680 "read": true, 00:04:07.680 "write": true, 00:04:07.680 "unmap": true, 00:04:07.680 "flush": true, 00:04:07.680 "reset": true, 00:04:07.680 "nvme_admin": false, 00:04:07.680 "nvme_io": false, 00:04:07.680 "nvme_io_md": false, 00:04:07.680 "write_zeroes": true, 00:04:07.680 "zcopy": true, 00:04:07.680 "get_zone_info": false, 00:04:07.680 "zone_management": false, 00:04:07.680 "zone_append": false, 00:04:07.680 "compare": false, 00:04:07.680 "compare_and_write": false, 00:04:07.680 "abort": true, 00:04:07.680 "seek_hole": false, 00:04:07.680 "seek_data": false, 00:04:07.680 "copy": true, 00:04:07.680 "nvme_iov_md": false 00:04:07.680 }, 00:04:07.680 "memory_domains": [ 00:04:07.680 { 00:04:07.680 "dma_device_id": "system", 00:04:07.680 "dma_device_type": 1 00:04:07.680 }, 00:04:07.680 { 00:04:07.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.680 "dma_device_type": 2 00:04:07.680 } 00:04:07.680 ], 00:04:07.680 "driver_specific": { 00:04:07.680 "passthru": { 00:04:07.680 "name": "Passthru0", 00:04:07.680 "base_bdev_name": "Malloc0" 00:04:07.680 } 00:04:07.680 } 00:04:07.680 } 00:04:07.680 ]' 00:04:07.680 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:07.680 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:07.680 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:07.680 11:27:10 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.680 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.680 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.680 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:07.680 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.680 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.680 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.680 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:07.680 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.680 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.680 11:27:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.680 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:07.680 11:27:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:07.680 11:27:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:07.680 00:04:07.680 real 0m0.239s 00:04:07.680 user 0m0.137s 00:04:07.680 sys 0m0.039s 00:04:07.680 11:27:11 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.680 11:27:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.680 ************************************ 00:04:07.680 END TEST rpc_integrity 00:04:07.680 ************************************ 00:04:07.680 11:27:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:07.680 11:27:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.680 11:27:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.680 11:27:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.680 ************************************ 00:04:07.680 START TEST rpc_plugins 00:04:07.680 
************************************ 00:04:07.680 11:27:11 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:07.680 11:27:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:07.680 11:27:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.680 11:27:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.680 11:27:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.680 11:27:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:07.680 11:27:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:07.680 11:27:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.680 11:27:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.680 11:27:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.680 11:27:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:07.680 { 00:04:07.680 "name": "Malloc1", 00:04:07.680 "aliases": [ 00:04:07.680 "1244efab-308a-4702-a875-4d1557f7e3c6" 00:04:07.680 ], 00:04:07.680 "product_name": "Malloc disk", 00:04:07.680 "block_size": 4096, 00:04:07.680 "num_blocks": 256, 00:04:07.680 "uuid": "1244efab-308a-4702-a875-4d1557f7e3c6", 00:04:07.680 "assigned_rate_limits": { 00:04:07.680 "rw_ios_per_sec": 0, 00:04:07.680 "rw_mbytes_per_sec": 0, 00:04:07.680 "r_mbytes_per_sec": 0, 00:04:07.680 "w_mbytes_per_sec": 0 00:04:07.680 }, 00:04:07.680 "claimed": false, 00:04:07.680 "zoned": false, 00:04:07.680 "supported_io_types": { 00:04:07.680 "read": true, 00:04:07.680 "write": true, 00:04:07.680 "unmap": true, 00:04:07.680 "flush": true, 00:04:07.680 "reset": true, 00:04:07.680 "nvme_admin": false, 00:04:07.680 "nvme_io": false, 00:04:07.680 "nvme_io_md": false, 00:04:07.680 "write_zeroes": true, 00:04:07.680 "zcopy": true, 00:04:07.680 "get_zone_info": false, 00:04:07.680 "zone_management": false, 00:04:07.680 "zone_append": false, 
00:04:07.680 "compare": false, 00:04:07.680 "compare_and_write": false, 00:04:07.680 "abort": true, 00:04:07.680 "seek_hole": false, 00:04:07.680 "seek_data": false, 00:04:07.681 "copy": true, 00:04:07.681 "nvme_iov_md": false 00:04:07.681 }, 00:04:07.681 "memory_domains": [ 00:04:07.681 { 00:04:07.681 "dma_device_id": "system", 00:04:07.681 "dma_device_type": 1 00:04:07.681 }, 00:04:07.681 { 00:04:07.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.681 "dma_device_type": 2 00:04:07.681 } 00:04:07.681 ], 00:04:07.681 "driver_specific": {} 00:04:07.681 } 00:04:07.681 ]' 00:04:07.681 11:27:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:07.938 11:27:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:07.938 11:27:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:07.938 11:27:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.938 11:27:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.938 11:27:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.938 11:27:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:07.938 11:27:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.938 11:27:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.938 11:27:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.938 11:27:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:07.938 11:27:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:07.938 11:27:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:07.938 00:04:07.938 real 0m0.146s 00:04:07.938 user 0m0.085s 00:04:07.938 sys 0m0.023s 00:04:07.938 11:27:11 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.938 11:27:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.938 ************************************ 00:04:07.938 END TEST 
rpc_plugins 00:04:07.938 ************************************ 00:04:07.938 11:27:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:07.938 11:27:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.938 11:27:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.938 11:27:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.938 ************************************ 00:04:07.938 START TEST rpc_trace_cmd_test 00:04:07.938 ************************************ 00:04:07.938 11:27:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:07.938 11:27:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:07.938 11:27:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:07.938 11:27:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.938 11:27:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:07.938 11:27:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.938 11:27:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:07.938 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1485274", 00:04:07.938 "tpoint_group_mask": "0x8", 00:04:07.938 "iscsi_conn": { 00:04:07.938 "mask": "0x2", 00:04:07.938 "tpoint_mask": "0x0" 00:04:07.938 }, 00:04:07.938 "scsi": { 00:04:07.938 "mask": "0x4", 00:04:07.938 "tpoint_mask": "0x0" 00:04:07.938 }, 00:04:07.938 "bdev": { 00:04:07.938 "mask": "0x8", 00:04:07.938 "tpoint_mask": "0xffffffffffffffff" 00:04:07.938 }, 00:04:07.938 "nvmf_rdma": { 00:04:07.938 "mask": "0x10", 00:04:07.938 "tpoint_mask": "0x0" 00:04:07.938 }, 00:04:07.938 "nvmf_tcp": { 00:04:07.938 "mask": "0x20", 00:04:07.938 "tpoint_mask": "0x0" 00:04:07.938 }, 00:04:07.938 "ftl": { 00:04:07.938 "mask": "0x40", 00:04:07.939 "tpoint_mask": "0x0" 00:04:07.939 }, 00:04:07.939 "blobfs": { 00:04:07.939 "mask": "0x80", 00:04:07.939 "tpoint_mask": "0x0" 
00:04:07.939 }, 00:04:07.939 "dsa": { 00:04:07.939 "mask": "0x200", 00:04:07.939 "tpoint_mask": "0x0" 00:04:07.939 }, 00:04:07.939 "thread": { 00:04:07.939 "mask": "0x400", 00:04:07.939 "tpoint_mask": "0x0" 00:04:07.939 }, 00:04:07.939 "nvme_pcie": { 00:04:07.939 "mask": "0x800", 00:04:07.939 "tpoint_mask": "0x0" 00:04:07.939 }, 00:04:07.939 "iaa": { 00:04:07.939 "mask": "0x1000", 00:04:07.939 "tpoint_mask": "0x0" 00:04:07.939 }, 00:04:07.939 "nvme_tcp": { 00:04:07.939 "mask": "0x2000", 00:04:07.939 "tpoint_mask": "0x0" 00:04:07.939 }, 00:04:07.939 "bdev_nvme": { 00:04:07.939 "mask": "0x4000", 00:04:07.939 "tpoint_mask": "0x0" 00:04:07.939 }, 00:04:07.939 "sock": { 00:04:07.939 "mask": "0x8000", 00:04:07.939 "tpoint_mask": "0x0" 00:04:07.939 }, 00:04:07.939 "blob": { 00:04:07.939 "mask": "0x10000", 00:04:07.939 "tpoint_mask": "0x0" 00:04:07.939 }, 00:04:07.939 "bdev_raid": { 00:04:07.939 "mask": "0x20000", 00:04:07.939 "tpoint_mask": "0x0" 00:04:07.939 }, 00:04:07.939 "scheduler": { 00:04:07.939 "mask": "0x40000", 00:04:07.939 "tpoint_mask": "0x0" 00:04:07.939 } 00:04:07.939 }' 00:04:07.939 11:27:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:07.939 11:27:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:07.939 11:27:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:08.196 11:27:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:08.196 11:27:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:08.196 11:27:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:08.196 11:27:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:08.196 11:27:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:08.196 11:27:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:08.196 11:27:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:08.196 
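The rpc_trace_cmd_test checks above reduce to three assertions on jq output: the trace info has more than two groups, it exposes `tpoint_group_mask`, and the bdev group's `tpoint_mask` is non-zero once group `0x8` is enabled. A minimal standalone sketch of those comparisons, with the jq results hard-coded as stand-ins for a live `rpc_cmd trace_get_info` call:

```bash
#!/bin/bash
# Stand-in values mirroring what the jq filters in the log extracted from
# trace_get_info; a real run would pipe `rpc_cmd trace_get_info` through jq.
num_groups=19
has_group_mask=true
bdev_tpoint_mask=0xffffffffffffffff

[ "$num_groups" -gt 2 ]        || { echo "too few trace groups" >&2; exit 1; }
[ "$has_group_mask" = true ]   || { echo "tpoint_group_mask missing" >&2; exit 1; }
[ "$bdev_tpoint_mask" != 0x0 ] || { echo "bdev tracepoints disabled" >&2; exit 1; }

# tpoint_group_mask 0x8 selects the bdev group, i.e. bit 3 must be set
(( 0x8 & (1 << 3) )) && echo "trace checks passed"
```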
00:04:08.196 real 0m0.196s 00:04:08.196 user 0m0.162s 00:04:08.196 sys 0m0.026s 00:04:08.196 11:27:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.196 11:27:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:08.196 ************************************ 00:04:08.196 END TEST rpc_trace_cmd_test 00:04:08.196 ************************************ 00:04:08.196 11:27:11 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:08.196 11:27:11 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:08.196 11:27:11 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:08.196 11:27:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.196 11:27:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.196 11:27:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.196 ************************************ 00:04:08.196 START TEST rpc_daemon_integrity 00:04:08.196 ************************************ 00:04:08.196 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:08.196 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:08.196 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.196 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.196 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.196 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:08.196 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:08.196 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:08.196 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:08.196 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.196 11:27:11 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:08.453 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.453 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:08.453 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:08.453 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.453 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.453 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.453 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:08.453 { 00:04:08.453 "name": "Malloc2", 00:04:08.453 "aliases": [ 00:04:08.453 "3bea8bb3-d930-48f0-9cdf-b7de9e875511" 00:04:08.453 ], 00:04:08.453 "product_name": "Malloc disk", 00:04:08.453 "block_size": 512, 00:04:08.453 "num_blocks": 16384, 00:04:08.453 "uuid": "3bea8bb3-d930-48f0-9cdf-b7de9e875511", 00:04:08.453 "assigned_rate_limits": { 00:04:08.453 "rw_ios_per_sec": 0, 00:04:08.453 "rw_mbytes_per_sec": 0, 00:04:08.453 "r_mbytes_per_sec": 0, 00:04:08.453 "w_mbytes_per_sec": 0 00:04:08.453 }, 00:04:08.453 "claimed": false, 00:04:08.453 "zoned": false, 00:04:08.453 "supported_io_types": { 00:04:08.453 "read": true, 00:04:08.453 "write": true, 00:04:08.453 "unmap": true, 00:04:08.453 "flush": true, 00:04:08.453 "reset": true, 00:04:08.453 "nvme_admin": false, 00:04:08.453 "nvme_io": false, 00:04:08.453 "nvme_io_md": false, 00:04:08.453 "write_zeroes": true, 00:04:08.453 "zcopy": true, 00:04:08.453 "get_zone_info": false, 00:04:08.453 "zone_management": false, 00:04:08.453 "zone_append": false, 00:04:08.453 "compare": false, 00:04:08.453 "compare_and_write": false, 00:04:08.453 "abort": true, 00:04:08.453 "seek_hole": false, 00:04:08.453 "seek_data": false, 00:04:08.453 "copy": true, 00:04:08.453 "nvme_iov_md": false 00:04:08.453 }, 00:04:08.453 "memory_domains": [ 00:04:08.453 { 
00:04:08.453 "dma_device_id": "system", 00:04:08.453 "dma_device_type": 1 00:04:08.453 }, 00:04:08.453 { 00:04:08.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.453 "dma_device_type": 2 00:04:08.453 } 00:04:08.453 ], 00:04:08.453 "driver_specific": {} 00:04:08.453 } 00:04:08.453 ]' 00:04:08.453 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:08.453 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:08.453 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:08.453 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.453 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.453 [2024-11-20 11:27:11.748692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:08.453 [2024-11-20 11:27:11.748727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:08.453 [2024-11-20 11:27:11.748741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1438000 00:04:08.453 [2024-11-20 11:27:11.748750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:08.453 [2024-11-20 11:27:11.749758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:08.453 [2024-11-20 11:27:11.749782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:08.453 Passthru0 00:04:08.453 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.453 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:08.453 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.453 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.453 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
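After Passthru0 is registered, rpc.sh re-lists the bdevs and expects `jq length` to report two entries (the now-claimed Malloc2 plus Passthru0). A dependency-free sketch of that length assertion, using a hard-coded JSON stand-in for the `bdev_get_bdevs` output and `grep` as a crude substitute for jq:

```bash
#!/bin/bash
# Hypothetical bdev_get_bdevs output after bdev_passthru_create:
# the claimed base bdev plus the passthru vbdev on top of it.
bdevs='[
  { "name": "Malloc2",   "claimed": true,  "claim_type": "exclusive_write" },
  { "name": "Passthru0", "claimed": false }
]'

# jq length on a flat array equals the number of "name" keys here
len=$(grep -c '"name"' <<< "$bdevs")
[ "$len" -eq 2 ] || { echo "expected 2 bdevs, got $len" >&2; exit 1; }

# The base bdev must show as claimed once the passthru holds it
grep -q '"claimed": true' <<< "$bdevs" && echo "passthru claim verified ($len bdevs)"
```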
00:04:08.453 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:08.453 { 00:04:08.453 "name": "Malloc2", 00:04:08.453 "aliases": [ 00:04:08.453 "3bea8bb3-d930-48f0-9cdf-b7de9e875511" 00:04:08.453 ], 00:04:08.453 "product_name": "Malloc disk", 00:04:08.453 "block_size": 512, 00:04:08.453 "num_blocks": 16384, 00:04:08.453 "uuid": "3bea8bb3-d930-48f0-9cdf-b7de9e875511", 00:04:08.453 "assigned_rate_limits": { 00:04:08.453 "rw_ios_per_sec": 0, 00:04:08.453 "rw_mbytes_per_sec": 0, 00:04:08.453 "r_mbytes_per_sec": 0, 00:04:08.453 "w_mbytes_per_sec": 0 00:04:08.453 }, 00:04:08.453 "claimed": true, 00:04:08.453 "claim_type": "exclusive_write", 00:04:08.453 "zoned": false, 00:04:08.453 "supported_io_types": { 00:04:08.453 "read": true, 00:04:08.453 "write": true, 00:04:08.453 "unmap": true, 00:04:08.453 "flush": true, 00:04:08.453 "reset": true, 00:04:08.453 "nvme_admin": false, 00:04:08.453 "nvme_io": false, 00:04:08.453 "nvme_io_md": false, 00:04:08.453 "write_zeroes": true, 00:04:08.453 "zcopy": true, 00:04:08.453 "get_zone_info": false, 00:04:08.453 "zone_management": false, 00:04:08.454 "zone_append": false, 00:04:08.454 "compare": false, 00:04:08.454 "compare_and_write": false, 00:04:08.454 "abort": true, 00:04:08.454 "seek_hole": false, 00:04:08.454 "seek_data": false, 00:04:08.454 "copy": true, 00:04:08.454 "nvme_iov_md": false 00:04:08.454 }, 00:04:08.454 "memory_domains": [ 00:04:08.454 { 00:04:08.454 "dma_device_id": "system", 00:04:08.454 "dma_device_type": 1 00:04:08.454 }, 00:04:08.454 { 00:04:08.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.454 "dma_device_type": 2 00:04:08.454 } 00:04:08.454 ], 00:04:08.454 "driver_specific": {} 00:04:08.454 }, 00:04:08.454 { 00:04:08.454 "name": "Passthru0", 00:04:08.454 "aliases": [ 00:04:08.454 "88d53a8b-10fe-5254-b85f-d85e8417ebd4" 00:04:08.454 ], 00:04:08.454 "product_name": "passthru", 00:04:08.454 "block_size": 512, 00:04:08.454 "num_blocks": 16384, 00:04:08.454 "uuid": 
"88d53a8b-10fe-5254-b85f-d85e8417ebd4", 00:04:08.454 "assigned_rate_limits": { 00:04:08.454 "rw_ios_per_sec": 0, 00:04:08.454 "rw_mbytes_per_sec": 0, 00:04:08.454 "r_mbytes_per_sec": 0, 00:04:08.454 "w_mbytes_per_sec": 0 00:04:08.454 }, 00:04:08.454 "claimed": false, 00:04:08.454 "zoned": false, 00:04:08.454 "supported_io_types": { 00:04:08.454 "read": true, 00:04:08.454 "write": true, 00:04:08.454 "unmap": true, 00:04:08.454 "flush": true, 00:04:08.454 "reset": true, 00:04:08.454 "nvme_admin": false, 00:04:08.454 "nvme_io": false, 00:04:08.454 "nvme_io_md": false, 00:04:08.454 "write_zeroes": true, 00:04:08.454 "zcopy": true, 00:04:08.454 "get_zone_info": false, 00:04:08.454 "zone_management": false, 00:04:08.454 "zone_append": false, 00:04:08.454 "compare": false, 00:04:08.454 "compare_and_write": false, 00:04:08.454 "abort": true, 00:04:08.454 "seek_hole": false, 00:04:08.454 "seek_data": false, 00:04:08.454 "copy": true, 00:04:08.454 "nvme_iov_md": false 00:04:08.454 }, 00:04:08.454 "memory_domains": [ 00:04:08.454 { 00:04:08.454 "dma_device_id": "system", 00:04:08.454 "dma_device_type": 1 00:04:08.454 }, 00:04:08.454 { 00:04:08.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.454 "dma_device_type": 2 00:04:08.454 } 00:04:08.454 ], 00:04:08.454 "driver_specific": { 00:04:08.454 "passthru": { 00:04:08.454 "name": "Passthru0", 00:04:08.454 "base_bdev_name": "Malloc2" 00:04:08.454 } 00:04:08.454 } 00:04:08.454 } 00:04:08.454 ]' 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:08.454 00:04:08.454 real 0m0.257s 00:04:08.454 user 0m0.143s 00:04:08.454 sys 0m0.057s 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.454 11:27:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.454 ************************************ 00:04:08.454 END TEST rpc_daemon_integrity 00:04:08.454 ************************************ 00:04:08.454 11:27:11 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:08.454 11:27:11 rpc -- rpc/rpc.sh@84 -- # killprocess 1485274 00:04:08.454 11:27:11 rpc -- common/autotest_common.sh@954 -- # '[' -z 1485274 ']' 00:04:08.454 11:27:11 rpc -- common/autotest_common.sh@958 -- # kill -0 1485274 00:04:08.454 11:27:11 rpc -- common/autotest_common.sh@959 -- # uname 00:04:08.454 11:27:11 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:08.454 11:27:11 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1485274 00:04:08.711 11:27:11 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:08.711 11:27:11 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:08.711 11:27:11 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1485274' 00:04:08.711 killing process with pid 1485274 00:04:08.711 11:27:11 rpc -- common/autotest_common.sh@973 -- # kill 1485274 00:04:08.711 11:27:11 rpc -- common/autotest_common.sh@978 -- # wait 1485274 00:04:08.968 00:04:08.968 real 0m2.184s 00:04:08.968 user 0m2.631s 00:04:08.968 sys 0m0.838s 00:04:08.968 11:27:12 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.968 11:27:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.968 ************************************ 00:04:08.968 END TEST rpc 00:04:08.968 ************************************ 00:04:08.968 11:27:12 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:08.968 11:27:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.968 11:27:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.968 11:27:12 -- common/autotest_common.sh@10 -- # set +x 00:04:08.968 ************************************ 00:04:08.968 START TEST skip_rpc 00:04:08.968 ************************************ 00:04:08.968 11:27:12 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:09.226 * Looking for test storage... 
00:04:09.226 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:09.226 11:27:12 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:09.226 11:27:12 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:09.226 11:27:12 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:09.226 11:27:12 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:09.226 11:27:12 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.226 11:27:12 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.226 11:27:12 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.226 11:27:12 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.226 11:27:12 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.226 11:27:12 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.226 11:27:12 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.226 11:27:12 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.226 11:27:12 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.226 11:27:12 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.226 11:27:12 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.227 11:27:12 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:09.227 11:27:12 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:09.227 11:27:12 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.227 11:27:12 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:09.227 11:27:12 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:09.227 11:27:12 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:09.227 11:27:12 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.227 11:27:12 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:09.227 11:27:12 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.227 11:27:12 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:09.227 11:27:12 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:09.227 11:27:12 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.227 11:27:12 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:09.227 11:27:12 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.227 11:27:12 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.227 11:27:12 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.227 11:27:12 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:09.227 11:27:12 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.227 11:27:12 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:09.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.227 --rc genhtml_branch_coverage=1 00:04:09.227 --rc genhtml_function_coverage=1 00:04:09.227 --rc genhtml_legend=1 00:04:09.227 --rc geninfo_all_blocks=1 00:04:09.227 --rc geninfo_unexecuted_blocks=1 00:04:09.227 00:04:09.227 ' 00:04:09.227 11:27:12 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:09.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.227 --rc genhtml_branch_coverage=1 00:04:09.227 --rc genhtml_function_coverage=1 00:04:09.227 --rc genhtml_legend=1 00:04:09.227 --rc geninfo_all_blocks=1 00:04:09.227 --rc geninfo_unexecuted_blocks=1 00:04:09.227 00:04:09.227 ' 00:04:09.227 11:27:12 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:09.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.227 --rc genhtml_branch_coverage=1 00:04:09.227 --rc genhtml_function_coverage=1 00:04:09.227 --rc genhtml_legend=1 00:04:09.227 --rc geninfo_all_blocks=1 00:04:09.227 --rc geninfo_unexecuted_blocks=1 00:04:09.227 00:04:09.227 ' 00:04:09.227 11:27:12 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:09.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.227 --rc genhtml_branch_coverage=1 00:04:09.227 --rc genhtml_function_coverage=1 00:04:09.227 --rc genhtml_legend=1 00:04:09.227 --rc geninfo_all_blocks=1 00:04:09.227 --rc geninfo_unexecuted_blocks=1 00:04:09.227 00:04:09.227 ' 00:04:09.227 11:27:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:09.227 11:27:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:09.227 11:27:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:09.227 11:27:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.227 11:27:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.227 11:27:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.227 ************************************ 00:04:09.227 START TEST skip_rpc 00:04:09.227 ************************************ 00:04:09.227 11:27:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:09.227 11:27:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1485814 00:04:09.227 11:27:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:09.227 11:27:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.227 11:27:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:09.227 
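The spdk_tgt above is started with `--no-rpc-server`, so the next step asserts that an RPC call *fails*; autotest_common.sh drives this through a NOT-style negation wrapper around `rpc_cmd`. A rough, self-contained sketch of that pattern (`false_rpc` is a hypothetical stand-in for `rpc_cmd spdk_get_version` against a target with no RPC listener):

```bash
#!/bin/bash
# Negation helper in the spirit of the log's NOT/valid_exec_arg flow:
# succeed only if the wrapped command fails.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is the expected outcome here
}

# Stand-in for an RPC call when no RPC server is listening: always errors out
false_rpc() {
    echo "connection refused: no RPC listener" >&2
    return 1
}

if NOT false_rpc 2>/dev/null; then
    es=1            # mirrors the log's `es=1` bookkeeping after the failed call
    echo "skip_rpc check passed (es=$es)"
fi
```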
[2024-11-20 11:27:12.685817] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:04:09.227 [2024-11-20 11:27:12.685861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485814 ] 00:04:09.485 [2024-11-20 11:27:12.761722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.485 [2024-11-20 11:27:12.808270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1485814 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1485814 ']' 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1485814 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1485814 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1485814' 00:04:14.747 killing process with pid 1485814 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1485814 00:04:14.747 11:27:17 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1485814 00:04:14.747 00:04:14.747 real 0m5.417s 00:04:14.747 user 0m5.136s 00:04:14.747 sys 0m0.317s 00:04:14.747 11:27:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.747 11:27:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.747 ************************************ 00:04:14.747 END TEST skip_rpc 00:04:14.747 ************************************ 00:04:14.747 11:27:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:14.747 11:27:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.747 11:27:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.747 11:27:18 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:04:14.747 ************************************ 00:04:14.747 START TEST skip_rpc_with_json 00:04:14.747 ************************************ 00:04:14.747 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:14.747 11:27:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:14.747 11:27:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1486574 00:04:14.747 11:27:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.747 11:27:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:14.747 11:27:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1486574 00:04:14.747 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1486574 ']' 00:04:14.747 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.747 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.747 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.747 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.747 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.747 [2024-11-20 11:27:18.192235] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:04:14.748 [2024-11-20 11:27:18.192294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486574 ] 00:04:15.006 [2024-11-20 11:27:18.268862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.006 [2024-11-20 11:27:18.313689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.265 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.265 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:15.265 11:27:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:15.265 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.265 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.265 [2024-11-20 11:27:18.545286] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:15.265 request: 00:04:15.266 { 00:04:15.266 "trtype": "tcp", 00:04:15.266 "method": "nvmf_get_transports", 00:04:15.266 "req_id": 1 00:04:15.266 } 00:04:15.266 Got JSON-RPC error response 00:04:15.266 response: 00:04:15.266 { 00:04:15.266 "code": -19, 00:04:15.266 "message": "No such device" 00:04:15.266 } 00:04:15.266 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:15.266 11:27:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:15.266 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.266 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.266 [2024-11-20 11:27:18.557397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:15.266 11:27:18 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.266 11:27:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:15.266 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.266 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.266 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.266 11:27:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:15.266 { 00:04:15.266 "subsystems": [ 00:04:15.266 { 00:04:15.266 "subsystem": "fsdev", 00:04:15.266 "config": [ 00:04:15.266 { 00:04:15.266 "method": "fsdev_set_opts", 00:04:15.266 "params": { 00:04:15.266 "fsdev_io_pool_size": 65535, 00:04:15.266 "fsdev_io_cache_size": 256 00:04:15.266 } 00:04:15.266 } 00:04:15.266 ] 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "subsystem": "keyring", 00:04:15.266 "config": [] 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "subsystem": "iobuf", 00:04:15.266 "config": [ 00:04:15.266 { 00:04:15.266 "method": "iobuf_set_options", 00:04:15.266 "params": { 00:04:15.266 "small_pool_count": 8192, 00:04:15.266 "large_pool_count": 1024, 00:04:15.266 "small_bufsize": 8192, 00:04:15.266 "large_bufsize": 135168, 00:04:15.266 "enable_numa": false 00:04:15.266 } 00:04:15.266 } 00:04:15.266 ] 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "subsystem": "sock", 00:04:15.266 "config": [ 00:04:15.266 { 00:04:15.266 "method": "sock_set_default_impl", 00:04:15.266 "params": { 00:04:15.266 "impl_name": "posix" 00:04:15.266 } 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "method": "sock_impl_set_options", 00:04:15.266 "params": { 00:04:15.266 "impl_name": "ssl", 00:04:15.266 "recv_buf_size": 4096, 00:04:15.266 "send_buf_size": 4096, 00:04:15.266 "enable_recv_pipe": true, 00:04:15.266 "enable_quickack": false, 00:04:15.266 "enable_placement_id": 
0, 00:04:15.266 "enable_zerocopy_send_server": true, 00:04:15.266 "enable_zerocopy_send_client": false, 00:04:15.266 "zerocopy_threshold": 0, 00:04:15.266 "tls_version": 0, 00:04:15.266 "enable_ktls": false 00:04:15.266 } 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "method": "sock_impl_set_options", 00:04:15.266 "params": { 00:04:15.266 "impl_name": "posix", 00:04:15.266 "recv_buf_size": 2097152, 00:04:15.266 "send_buf_size": 2097152, 00:04:15.266 "enable_recv_pipe": true, 00:04:15.266 "enable_quickack": false, 00:04:15.266 "enable_placement_id": 0, 00:04:15.266 "enable_zerocopy_send_server": true, 00:04:15.266 "enable_zerocopy_send_client": false, 00:04:15.266 "zerocopy_threshold": 0, 00:04:15.266 "tls_version": 0, 00:04:15.266 "enable_ktls": false 00:04:15.266 } 00:04:15.266 } 00:04:15.266 ] 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "subsystem": "vmd", 00:04:15.266 "config": [] 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "subsystem": "accel", 00:04:15.266 "config": [ 00:04:15.266 { 00:04:15.266 "method": "accel_set_options", 00:04:15.266 "params": { 00:04:15.266 "small_cache_size": 128, 00:04:15.266 "large_cache_size": 16, 00:04:15.266 "task_count": 2048, 00:04:15.266 "sequence_count": 2048, 00:04:15.266 "buf_count": 2048 00:04:15.266 } 00:04:15.266 } 00:04:15.266 ] 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "subsystem": "bdev", 00:04:15.266 "config": [ 00:04:15.266 { 00:04:15.266 "method": "bdev_set_options", 00:04:15.266 "params": { 00:04:15.266 "bdev_io_pool_size": 65535, 00:04:15.266 "bdev_io_cache_size": 256, 00:04:15.266 "bdev_auto_examine": true, 00:04:15.266 "iobuf_small_cache_size": 128, 00:04:15.266 "iobuf_large_cache_size": 16 00:04:15.266 } 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "method": "bdev_raid_set_options", 00:04:15.266 "params": { 00:04:15.266 "process_window_size_kb": 1024, 00:04:15.266 "process_max_bandwidth_mb_sec": 0 00:04:15.266 } 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "method": "bdev_iscsi_set_options", 00:04:15.266 "params": 
{ 00:04:15.266 "timeout_sec": 30 00:04:15.266 } 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "method": "bdev_nvme_set_options", 00:04:15.266 "params": { 00:04:15.266 "action_on_timeout": "none", 00:04:15.266 "timeout_us": 0, 00:04:15.266 "timeout_admin_us": 0, 00:04:15.266 "keep_alive_timeout_ms": 10000, 00:04:15.266 "arbitration_burst": 0, 00:04:15.266 "low_priority_weight": 0, 00:04:15.266 "medium_priority_weight": 0, 00:04:15.266 "high_priority_weight": 0, 00:04:15.266 "nvme_adminq_poll_period_us": 10000, 00:04:15.266 "nvme_ioq_poll_period_us": 0, 00:04:15.266 "io_queue_requests": 0, 00:04:15.266 "delay_cmd_submit": true, 00:04:15.266 "transport_retry_count": 4, 00:04:15.266 "bdev_retry_count": 3, 00:04:15.266 "transport_ack_timeout": 0, 00:04:15.266 "ctrlr_loss_timeout_sec": 0, 00:04:15.266 "reconnect_delay_sec": 0, 00:04:15.266 "fast_io_fail_timeout_sec": 0, 00:04:15.266 "disable_auto_failback": false, 00:04:15.266 "generate_uuids": false, 00:04:15.266 "transport_tos": 0, 00:04:15.266 "nvme_error_stat": false, 00:04:15.266 "rdma_srq_size": 0, 00:04:15.266 "io_path_stat": false, 00:04:15.266 "allow_accel_sequence": false, 00:04:15.266 "rdma_max_cq_size": 0, 00:04:15.266 "rdma_cm_event_timeout_ms": 0, 00:04:15.266 "dhchap_digests": [ 00:04:15.266 "sha256", 00:04:15.266 "sha384", 00:04:15.266 "sha512" 00:04:15.266 ], 00:04:15.266 "dhchap_dhgroups": [ 00:04:15.266 "null", 00:04:15.266 "ffdhe2048", 00:04:15.266 "ffdhe3072", 00:04:15.266 "ffdhe4096", 00:04:15.266 "ffdhe6144", 00:04:15.266 "ffdhe8192" 00:04:15.266 ] 00:04:15.266 } 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "method": "bdev_nvme_set_hotplug", 00:04:15.266 "params": { 00:04:15.266 "period_us": 100000, 00:04:15.266 "enable": false 00:04:15.266 } 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "method": "bdev_wait_for_examine" 00:04:15.266 } 00:04:15.266 ] 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "subsystem": "scsi", 00:04:15.266 "config": null 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "subsystem": 
"scheduler", 00:04:15.266 "config": [ 00:04:15.266 { 00:04:15.266 "method": "framework_set_scheduler", 00:04:15.266 "params": { 00:04:15.266 "name": "static" 00:04:15.266 } 00:04:15.266 } 00:04:15.266 ] 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "subsystem": "vhost_scsi", 00:04:15.266 "config": [] 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "subsystem": "vhost_blk", 00:04:15.266 "config": [] 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "subsystem": "ublk", 00:04:15.266 "config": [] 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "subsystem": "nbd", 00:04:15.266 "config": [] 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "subsystem": "nvmf", 00:04:15.266 "config": [ 00:04:15.266 { 00:04:15.266 "method": "nvmf_set_config", 00:04:15.266 "params": { 00:04:15.266 "discovery_filter": "match_any", 00:04:15.266 "admin_cmd_passthru": { 00:04:15.266 "identify_ctrlr": false 00:04:15.266 }, 00:04:15.266 "dhchap_digests": [ 00:04:15.266 "sha256", 00:04:15.266 "sha384", 00:04:15.266 "sha512" 00:04:15.266 ], 00:04:15.266 "dhchap_dhgroups": [ 00:04:15.266 "null", 00:04:15.266 "ffdhe2048", 00:04:15.266 "ffdhe3072", 00:04:15.266 "ffdhe4096", 00:04:15.266 "ffdhe6144", 00:04:15.266 "ffdhe8192" 00:04:15.266 ] 00:04:15.266 } 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "method": "nvmf_set_max_subsystems", 00:04:15.266 "params": { 00:04:15.266 "max_subsystems": 1024 00:04:15.266 } 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "method": "nvmf_set_crdt", 00:04:15.266 "params": { 00:04:15.266 "crdt1": 0, 00:04:15.266 "crdt2": 0, 00:04:15.266 "crdt3": 0 00:04:15.266 } 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "method": "nvmf_create_transport", 00:04:15.266 "params": { 00:04:15.266 "trtype": "TCP", 00:04:15.266 "max_queue_depth": 128, 00:04:15.266 "max_io_qpairs_per_ctrlr": 127, 00:04:15.266 "in_capsule_data_size": 4096, 00:04:15.266 "max_io_size": 131072, 00:04:15.266 "io_unit_size": 131072, 00:04:15.266 "max_aq_depth": 128, 00:04:15.266 "num_shared_buffers": 511, 00:04:15.266 "buf_cache_size": 
4294967295, 00:04:15.266 "dif_insert_or_strip": false, 00:04:15.266 "zcopy": false, 00:04:15.266 "c2h_success": true, 00:04:15.266 "sock_priority": 0, 00:04:15.266 "abort_timeout_sec": 1, 00:04:15.266 "ack_timeout": 0, 00:04:15.266 "data_wr_pool_size": 0 00:04:15.266 } 00:04:15.266 } 00:04:15.266 ] 00:04:15.266 }, 00:04:15.266 { 00:04:15.266 "subsystem": "iscsi", 00:04:15.266 "config": [ 00:04:15.266 { 00:04:15.266 "method": "iscsi_set_options", 00:04:15.266 "params": { 00:04:15.266 "node_base": "iqn.2016-06.io.spdk", 00:04:15.266 "max_sessions": 128, 00:04:15.266 "max_connections_per_session": 2, 00:04:15.266 "max_queue_depth": 64, 00:04:15.266 "default_time2wait": 2, 00:04:15.266 "default_time2retain": 20, 00:04:15.266 "first_burst_length": 8192, 00:04:15.266 "immediate_data": true, 00:04:15.266 "allow_duplicated_isid": false, 00:04:15.266 "error_recovery_level": 0, 00:04:15.266 "nop_timeout": 60, 00:04:15.266 "nop_in_interval": 30, 00:04:15.266 "disable_chap": false, 00:04:15.266 "require_chap": false, 00:04:15.266 "mutual_chap": false, 00:04:15.266 "chap_group": 0, 00:04:15.266 "max_large_datain_per_connection": 64, 00:04:15.266 "max_r2t_per_connection": 4, 00:04:15.266 "pdu_pool_size": 36864, 00:04:15.266 "immediate_data_pool_size": 16384, 00:04:15.266 "data_out_pool_size": 2048 00:04:15.266 } 00:04:15.266 } 00:04:15.266 ] 00:04:15.266 } 00:04:15.266 ] 00:04:15.266 } 00:04:15.266 11:27:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:15.266 11:27:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1486574 00:04:15.266 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1486574 ']' 00:04:15.266 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1486574 00:04:15.266 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:15.266 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux 
= Linux ']' 00:04:15.524 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1486574 00:04:15.524 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:15.524 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:15.524 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1486574' 00:04:15.524 killing process with pid 1486574 00:04:15.524 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1486574 00:04:15.524 11:27:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1486574 00:04:15.782 11:27:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1486756 00:04:15.782 11:27:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:15.782 11:27:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:21.046 11:27:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1486756 00:04:21.046 11:27:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1486756 ']' 00:04:21.046 11:27:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1486756 00:04:21.046 11:27:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:21.046 11:27:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.046 11:27:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1486756 00:04:21.046 11:27:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.046 11:27:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:04:21.046 11:27:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1486756' 00:04:21.046 killing process with pid 1486756 00:04:21.046 11:27:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1486756 00:04:21.046 11:27:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1486756 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:21.304 00:04:21.304 real 0m6.406s 00:04:21.304 user 0m6.019s 00:04:21.304 sys 0m0.691s 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.304 ************************************ 00:04:21.304 END TEST skip_rpc_with_json 00:04:21.304 ************************************ 00:04:21.304 11:27:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:21.304 11:27:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.304 11:27:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.304 11:27:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.304 ************************************ 00:04:21.304 START TEST skip_rpc_with_delay 00:04:21.304 ************************************ 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # local es=0 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.304 [2024-11-20 11:27:24.679745] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:21.304 11:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:21.305 11:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:21.305 00:04:21.305 real 0m0.071s 00:04:21.305 user 0m0.036s 00:04:21.305 sys 0m0.035s 00:04:21.305 11:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.305 11:27:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:21.305 ************************************ 00:04:21.305 END TEST skip_rpc_with_delay 00:04:21.305 ************************************ 00:04:21.305 11:27:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:21.305 11:27:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:21.305 11:27:24 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:21.305 11:27:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.305 11:27:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.305 11:27:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.305 ************************************ 00:04:21.305 START TEST exit_on_failed_rpc_init 00:04:21.305 ************************************ 00:04:21.305 11:27:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:21.305 11:27:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1487540 00:04:21.305 11:27:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1487540 00:04:21.305 11:27:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:21.305 
11:27:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1487540 ']' 00:04:21.305 11:27:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.305 11:27:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.305 11:27:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.305 11:27:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.305 11:27:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:21.563 [2024-11-20 11:27:24.831327] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:04:21.564 [2024-11-20 11:27:24.831372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487540 ] 00:04:21.564 [2024-11-20 11:27:24.907507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.564 [2024-11-20 11:27:24.952108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:22.498 11:27:25 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:22.498 [2024-11-20 11:27:25.720614] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:04:22.498 [2024-11-20 11:27:25.720678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487656 ] 00:04:22.498 [2024-11-20 11:27:25.798221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.498 [2024-11-20 11:27:25.842736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:22.498 [2024-11-20 11:27:25.842804] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:22.498 [2024-11-20 11:27:25.842815] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:22.498 [2024-11-20 11:27:25.842824] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1487540 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1487540 ']' 00:04:22.498 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1487540 00:04:22.499 11:27:25 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:22.499 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.499 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1487540 00:04:22.499 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.499 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.499 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1487540' 00:04:22.499 killing process with pid 1487540 00:04:22.499 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1487540 00:04:22.499 11:27:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1487540 00:04:23.065 00:04:23.065 real 0m1.512s 00:04:23.065 user 0m1.688s 00:04:23.065 sys 0m0.471s 00:04:23.065 11:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.065 11:27:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:23.065 ************************************ 00:04:23.065 END TEST exit_on_failed_rpc_init 00:04:23.065 ************************************ 00:04:23.065 11:27:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:23.065 00:04:23.065 real 0m13.932s 00:04:23.065 user 0m13.096s 00:04:23.065 sys 0m1.864s 00:04:23.065 11:27:26 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.065 11:27:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.065 ************************************ 00:04:23.065 END TEST skip_rpc 00:04:23.065 ************************************ 00:04:23.065 11:27:26 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:23.065 11:27:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.065 11:27:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.065 11:27:26 -- common/autotest_common.sh@10 -- # set +x 00:04:23.065 ************************************ 00:04:23.065 START TEST rpc_client 00:04:23.065 ************************************ 00:04:23.065 11:27:26 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:23.065 * Looking for test storage... 00:04:23.065 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:04:23.065 11:27:26 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:23.065 11:27:26 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:23.065 11:27:26 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:23.324 11:27:26 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@344 -- # case "$op" in 
00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.324 11:27:26 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:23.324 11:27:26 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.324 11:27:26 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:23.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.324 --rc genhtml_branch_coverage=1 00:04:23.324 --rc genhtml_function_coverage=1 00:04:23.324 --rc genhtml_legend=1 00:04:23.324 --rc geninfo_all_blocks=1 00:04:23.324 --rc geninfo_unexecuted_blocks=1 00:04:23.324 00:04:23.324 ' 00:04:23.324 11:27:26 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:23.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.324 --rc genhtml_branch_coverage=1 00:04:23.324 
--rc genhtml_function_coverage=1 00:04:23.324 --rc genhtml_legend=1 00:04:23.324 --rc geninfo_all_blocks=1 00:04:23.324 --rc geninfo_unexecuted_blocks=1 00:04:23.324 00:04:23.324 ' 00:04:23.324 11:27:26 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:23.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.324 --rc genhtml_branch_coverage=1 00:04:23.324 --rc genhtml_function_coverage=1 00:04:23.324 --rc genhtml_legend=1 00:04:23.324 --rc geninfo_all_blocks=1 00:04:23.324 --rc geninfo_unexecuted_blocks=1 00:04:23.324 00:04:23.324 ' 00:04:23.324 11:27:26 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:23.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.324 --rc genhtml_branch_coverage=1 00:04:23.324 --rc genhtml_function_coverage=1 00:04:23.324 --rc genhtml_legend=1 00:04:23.324 --rc geninfo_all_blocks=1 00:04:23.324 --rc geninfo_unexecuted_blocks=1 00:04:23.324 00:04:23.324 ' 00:04:23.324 11:27:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:23.324 OK 00:04:23.324 11:27:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:23.324 00:04:23.325 real 0m0.211s 00:04:23.325 user 0m0.102s 00:04:23.325 sys 0m0.125s 00:04:23.325 11:27:26 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.325 11:27:26 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:23.325 ************************************ 00:04:23.325 END TEST rpc_client 00:04:23.325 ************************************ 00:04:23.325 11:27:26 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:23.325 11:27:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.325 11:27:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.325 11:27:26 -- common/autotest_common.sh@10 -- # set +x 
00:04:23.325 ************************************ 00:04:23.325 START TEST json_config 00:04:23.325 ************************************ 00:04:23.325 11:27:26 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:23.325 11:27:26 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:23.325 11:27:26 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:23.325 11:27:26 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:23.584 11:27:26 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:23.584 11:27:26 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.584 11:27:26 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.584 11:27:26 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.584 11:27:26 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.584 11:27:26 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.584 11:27:26 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.584 11:27:26 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.584 11:27:26 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.584 11:27:26 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.584 11:27:26 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.584 11:27:26 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.584 11:27:26 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:23.584 11:27:26 json_config -- scripts/common.sh@345 -- # : 1 00:04:23.584 11:27:26 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.584 11:27:26 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.584 11:27:26 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:23.584 11:27:26 json_config -- scripts/common.sh@353 -- # local d=1 00:04:23.584 11:27:26 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.584 11:27:26 json_config -- scripts/common.sh@355 -- # echo 1 00:04:23.584 11:27:26 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.584 11:27:26 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:23.584 11:27:26 json_config -- scripts/common.sh@353 -- # local d=2 00:04:23.584 11:27:26 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.584 11:27:26 json_config -- scripts/common.sh@355 -- # echo 2 00:04:23.584 11:27:26 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.584 11:27:26 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.584 11:27:26 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.584 11:27:26 json_config -- scripts/common.sh@368 -- # return 0 00:04:23.584 11:27:26 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.584 11:27:26 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:23.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.584 --rc genhtml_branch_coverage=1 00:04:23.584 --rc genhtml_function_coverage=1 00:04:23.584 --rc genhtml_legend=1 00:04:23.584 --rc geninfo_all_blocks=1 00:04:23.584 --rc geninfo_unexecuted_blocks=1 00:04:23.584 00:04:23.584 ' 00:04:23.584 11:27:26 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:23.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.584 --rc genhtml_branch_coverage=1 00:04:23.585 --rc genhtml_function_coverage=1 00:04:23.585 --rc genhtml_legend=1 00:04:23.585 --rc geninfo_all_blocks=1 00:04:23.585 --rc geninfo_unexecuted_blocks=1 00:04:23.585 00:04:23.585 ' 00:04:23.585 11:27:26 json_config -- 
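The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.-:` and compares the numeric fields element by element; because `1 < 2` in the first field, the check returns 0 and the old-lcov `--rc` options are selected. A minimal re-creation of that logic (a sketch, not the exact `scripts/common.sh` implementation) looks like:

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced above. Assumption: simplified
# re-creation of the cmp_versions "<" path, not the real SPDK helper.
lt() {
  # Returns 0 when $1 < $2, comparing dot/dash/colon-separated numeric fields.
  local IFS=.-: v a b
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    a=${ver1[v]:-0} b=${ver2[v]:-0}     # missing fields compare as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1                               # equal is not less-than
}

lt 1.15 2 && echo "old lcov"             # prints "old lcov", as in the trace
```

This matches the trace: `ver1=(1 15)`, `ver2=(2)`, and the first-field comparison decides the result before the differing lengths matter.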
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:23.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.585 --rc genhtml_branch_coverage=1 00:04:23.585 --rc genhtml_function_coverage=1 00:04:23.585 --rc genhtml_legend=1 00:04:23.585 --rc geninfo_all_blocks=1 00:04:23.585 --rc geninfo_unexecuted_blocks=1 00:04:23.585 00:04:23.585 ' 00:04:23.585 11:27:26 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:23.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.585 --rc genhtml_branch_coverage=1 00:04:23.585 --rc genhtml_function_coverage=1 00:04:23.585 --rc genhtml_legend=1 00:04:23.585 --rc geninfo_all_blocks=1 00:04:23.585 --rc geninfo_unexecuted_blocks=1 00:04:23.585 00:04:23.585 ' 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:23.585 11:27:26 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:23.585 11:27:26 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:23.585 11:27:26 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:23.585 11:27:26 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:23.585 11:27:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.585 11:27:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.585 11:27:26 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.585 11:27:26 json_config -- paths/export.sh@5 -- # export PATH 00:04:23.585 11:27:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:04:23.585 11:27:26 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:04:23.585 11:27:26 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:04:23.585 11:27:26 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@50 -- # : 0 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:04:23.585 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: 
[: : integer expression expected 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:04:23.585 11:27:26 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@40 -- # 
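The `json_config/json_config.sh@31-34` declarations above keep per-app state (RPC socket, launch parameters, config path) in parallel associative arrays keyed by `target`/`initiator`. A small standalone sketch of that bookkeeping, using the exact values shown in the trace:

```shell
#!/usr/bin/env bash
# Sketch of the per-app bookkeeping declared in json_config.sh above.
# Values are copied from the trace; the loop is illustrative only.
declare -A app_socket=(
  [target]='/var/tmp/spdk_tgt.sock'
  [initiator]='/var/tmp/spdk_initiator.sock'
)
declare -A app_params=(
  [target]='-m 0x1 -s 1024'
  [initiator]='-m 0x2 -g -u -s 1024'
)

for app in target initiator; do
  echo "$app: socket=${app_socket[$app]} params=${app_params[$app]}"
done
```

Keyed lookups like `${app_socket[$app]}` are what let later helpers (`waitforlisten`, `tgt_rpc`) address either app generically.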
last_event_id=0 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:23.585 INFO: JSON configuration test init 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:23.585 11:27:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.585 11:27:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:23.585 11:27:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.585 11:27:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.585 11:27:26 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:23.585 11:27:26 json_config -- json_config/common.sh@9 -- # local app=target 00:04:23.585 11:27:26 json_config -- json_config/common.sh@10 -- # shift 00:04:23.585 11:27:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:23.585 11:27:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:23.585 11:27:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:23.585 11:27:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.585 11:27:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.585 11:27:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1487885 00:04:23.585 11:27:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:23.585 Waiting for target to run... 
00:04:23.585 11:27:26 json_config -- json_config/common.sh@25 -- # waitforlisten 1487885 /var/tmp/spdk_tgt.sock 00:04:23.585 11:27:26 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:23.585 11:27:26 json_config -- common/autotest_common.sh@835 -- # '[' -z 1487885 ']' 00:04:23.585 11:27:26 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:23.585 11:27:26 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.585 11:27:26 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:23.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:23.585 11:27:26 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.585 11:27:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.585 [2024-11-20 11:27:26.919630] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
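The `waitforlisten 1487885 /var/tmp/spdk_tgt.sock` step above blocks until the freshly launched `spdk_tgt` is alive and listening on its UNIX-domain RPC socket. A hedged sketch of that polling pattern (illustrative names and retry count; not the exact `autotest_common.sh` implementation):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll until the target process is
# alive AND its RPC socket exists. max_retries parameter is an assumption
# added here to keep the sketch testable.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=${3:-100} i
  for (( i = 0; i < max_retries; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
    [ -S "$rpc_addr" ] && return 0           # socket is up: ready for RPCs
    sleep 0.1
  done
  return 1                                   # timed out
}
```

Usage mirrors the trace: `waitforlisten "$app_pid" /var/tmp/spdk_tgt.sock || exit 1`, after which `rpc.py -s /var/tmp/spdk_tgt.sock …` calls can proceed.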
00:04:23.585 [2024-11-20 11:27:26.919686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487885 ] 00:04:23.844 [2024-11-20 11:27:27.242002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.844 [2024-11-20 11:27:27.279353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.411 11:27:27 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.411 11:27:27 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:24.411 11:27:27 json_config -- json_config/common.sh@26 -- # echo '' 00:04:24.411 00:04:24.411 11:27:27 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:24.411 11:27:27 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:24.411 11:27:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.411 11:27:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.411 11:27:27 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:24.411 11:27:27 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:24.411 11:27:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:24.411 11:27:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.411 11:27:27 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:24.411 11:27:27 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:24.411 11:27:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:27.693 11:27:30 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:27.693 11:27:30 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:27.693 11:27:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:27.693 11:27:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.693 11:27:30 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:27.693 11:27:30 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:27.693 11:27:30 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:27.693 11:27:30 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:27.693 11:27:30 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:27.693 11:27:30 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:27.693 11:27:30 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:27.693 11:27:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:27.693 11:27:31 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:27.693 11:27:31 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:27.693 11:27:31 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:27.693 11:27:31 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:27.693 11:27:31 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:27.693 11:27:31 json_config -- json_config/json_config.sh@54 -- # sort 00:04:27.693 11:27:31 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:27.693 11:27:31 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:27.693 11:27:31 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:27.693 11:27:31 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:27.693 11:27:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:27.693 11:27:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.951 11:27:31 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:27.951 11:27:31 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:27.951 11:27:31 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:27.951 11:27:31 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:27.951 11:27:31 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:27.951 11:27:31 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:27.951 11:27:31 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:27.951 11:27:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:27.951 11:27:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.951 11:27:31 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:27.951 11:27:31 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:04:27.951 11:27:31 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:04:27.951 11:27:31 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:04:27.951 11:27:31 json_config -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:04:27.951 11:27:31 json_config -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:27.951 11:27:31 json_config -- nvmf/common.sh@296 -- # prepare_net_devs 00:04:27.951 11:27:31 json_config -- nvmf/common.sh@258 -- # local -g is_hw=no 00:04:27.951 11:27:31 json_config -- nvmf/common.sh@260 -- # 
remove_target_ns 00:04:27.951 11:27:31 json_config -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:04:27.951 11:27:31 json_config -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:04:27.951 11:27:31 json_config -- common/autotest_common.sh@22 -- # _remove_target_ns 00:04:27.951 11:27:31 json_config -- nvmf/common.sh@262 -- # [[ phy-fallback != virt ]] 00:04:27.951 11:27:31 json_config -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:04:27.952 11:27:31 json_config -- nvmf/common.sh@125 -- # xtrace_disable 00:04:27.952 11:27:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@131 -- # pci_devs=() 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@131 -- # local -a pci_devs 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@132 -- # pci_net_devs=() 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@133 -- # pci_drivers=() 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@133 -- # local -A pci_drivers 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@135 -- # net_devs=() 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@135 -- # local -ga net_devs 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@136 -- # e810=() 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@136 -- # local -ga e810 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@137 -- # x722=() 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@137 -- # local -ga x722 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@138 -- # mlx=() 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@138 -- # local -ga mlx 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:04:34.509 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 
00:04:34.509 11:27:37 json_config -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:04:34.509 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:04:34.509 Found net devices under 0000:18:00.0: mlx_0_0 
00:04:34.509 11:27:37 json_config -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:04:34.509 Found net devices under 0000:18:00.1: mlx_0_1 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@253 -- # get_rdma_if_list 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@75 -- # rdma_devs=() 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:04:34.509 11:27:37 
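The `Found net devices under 0000:18:00.x: mlx_0_x` lines above come from globbing each matched PCI function's `net/` directory in sysfs and stripping the path prefix. A runnable sketch of that discovery step (the `sysfs_root` parameter is an assumption added so the sketch can run against a fake tree; the real script reads `/sys/bus/pci/devices` directly):

```shell
#!/usr/bin/env bash
# Sketch of per-PCI netdev discovery: the entries under <pci>/net/ are the
# kernel network interfaces bound to that NIC.
pci_net_devs_for() {
  local sysfs_root=$1 pci=$2
  local pci_net_devs=("$sysfs_root/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
}
```

Against a real system this would print, e.g., `Found net devices under 0000:18:00.0: mlx_0_0`, matching the trace.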
json_config -- nvmf/common.sh@89 -- # continue 2 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@89 -- # continue 2 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@262 -- # is_hw=yes 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@61 -- # uname 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@65 -- # modprobe ib_cm 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@66 -- # modprobe ib_core 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@67 -- # modprobe ib_umad 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@69 -- # modprobe iw_cm 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@70 -- # 
modprobe rdma_cm 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@27 -- # local -gA dev_map 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@28 -- # local -g _dev 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@44 -- # ips=() 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@58 -- # key_initiator=target1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 
00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@11 -- # local val=167772161 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:04:34.509 10.0.0.1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@11 -- # local val=167772162 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:04:34.509 
11:27:37 json_config -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:04:34.509 10.0.0.2 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@38 -- # ping_ips 1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:04:34.509 11:27:37 json_config -- 
nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@168 -- # get_net_dev target0 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@107 -- # local dev=target0 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:04:34.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:04:34.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:04:34.509 00:04:34.509 --- 10.0.0.2 ping statistics --- 00:04:34.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:34.509 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@168 -- # get_net_dev target0 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@107 -- # local dev=target0 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:04:34.509 
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:04:34.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.025 ms 00:04:34.509 00:04:34.509 --- 10.0.0.2 ping statistics --- 00:04:34.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:34.509 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@98 -- # (( pair++ )) 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@270 -- # return 0 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:04:34.509 11:27:37 json_config -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@168 -- # get_net_dev target0 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@107 -- # local dev=target0 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:04:34.509 11:27:37 json_config -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@172 -- # 
cat /sys/class/net/mlx_0_1/ifalias 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@168 -- # get_net_dev target1 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@107 -- # local dev=target1 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@200 -- # 
get_target_ip_address '' 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@168 -- # get_net_dev target0 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@107 -- # local dev=target0 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@168 -- # get_net_dev target1 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@107 -- # local dev=target1 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:04:34.510 11:27:37 
json_config -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:04:34.510 11:27:37 json_config -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:04:34.510 11:27:37 json_config -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:04:34.510 11:27:37 json_config -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:04:34.510 11:27:37 json_config -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:04:34.510 11:27:37 json_config -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:04:34.510 11:27:37 json_config -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:04:34.510 11:27:37 json_config -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:04:34.510 11:27:37 json_config -- json_config/json_config.sh@244 -- # [[ -z 10.0.0.2 ]] 00:04:34.510 11:27:37 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:34.510 11:27:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:34.768 MallocForNvmf0 00:04:34.768 11:27:38 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:34.768 11:27:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:35.024 MallocForNvmf1 00:04:35.024 11:27:38 
json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:04:35.024 11:27:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:04:35.282 [2024-11-20 11:27:38.506350] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:04:35.282 [2024-11-20 11:27:38.537225] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12571b0/0x112b8b0) succeed. 00:04:35.282 [2024-11-20 11:27:38.550763] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1256150/0x11ab540) succeed. 00:04:35.282 11:27:38 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:35.282 11:27:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:35.540 11:27:38 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:35.540 11:27:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:35.540 11:27:38 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:35.540 11:27:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:35.805 11:27:39 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 
10.0.0.2 -s 4420 00:04:35.805 11:27:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:04:36.068 [2024-11-20 11:27:39.352459] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:04:36.068 11:27:39 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:36.068 11:27:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.068 11:27:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.068 11:27:39 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:36.068 11:27:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.068 11:27:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.068 11:27:39 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:36.068 11:27:39 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:36.068 11:27:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:36.326 MallocBdevForConfigChangeCheck 00:04:36.326 11:27:39 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:36.326 11:27:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.326 11:27:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.326 11:27:39 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:36.326 11:27:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:36.584 11:27:40 
json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:36.584 INFO: shutting down applications... 00:04:36.584 11:27:40 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:36.584 11:27:40 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:36.584 11:27:40 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:36.584 11:27:40 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:40.765 Calling clear_iscsi_subsystem 00:04:40.765 Calling clear_nvmf_subsystem 00:04:40.765 Calling clear_nbd_subsystem 00:04:40.765 Calling clear_ublk_subsystem 00:04:40.765 Calling clear_vhost_blk_subsystem 00:04:40.765 Calling clear_vhost_scsi_subsystem 00:04:40.765 Calling clear_bdev_subsystem 00:04:40.765 11:27:43 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:04:40.765 11:27:43 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:40.765 11:27:43 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:40.765 11:27:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:40.765 11:27:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:40.765 11:27:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:41.024 11:27:44 json_config -- json_config/json_config.sh@352 -- # break 00:04:41.024 11:27:44 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:41.024 11:27:44 json_config -- 
json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:41.024 11:27:44 json_config -- json_config/common.sh@31 -- # local app=target 00:04:41.024 11:27:44 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:41.024 11:27:44 json_config -- json_config/common.sh@35 -- # [[ -n 1487885 ]] 00:04:41.024 11:27:44 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1487885 00:04:41.024 11:27:44 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:41.024 11:27:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.024 11:27:44 json_config -- json_config/common.sh@41 -- # kill -0 1487885 00:04:41.024 11:27:44 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:41.590 11:27:44 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:41.590 11:27:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.590 11:27:44 json_config -- json_config/common.sh@41 -- # kill -0 1487885 00:04:41.590 11:27:44 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:41.590 11:27:44 json_config -- json_config/common.sh@43 -- # break 00:04:41.590 11:27:44 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:41.590 11:27:44 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:41.590 SPDK target shutdown done 00:04:41.590 11:27:44 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:41.590 INFO: relaunching applications... 
00:04:41.590 11:27:44 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:41.590 11:27:44 json_config -- json_config/common.sh@9 -- # local app=target 00:04:41.590 11:27:44 json_config -- json_config/common.sh@10 -- # shift 00:04:41.590 11:27:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:41.590 11:27:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:41.590 11:27:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:41.590 11:27:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.590 11:27:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.590 11:27:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1492402 00:04:41.590 11:27:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:41.590 Waiting for target to run... 00:04:41.590 11:27:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:41.590 11:27:44 json_config -- json_config/common.sh@25 -- # waitforlisten 1492402 /var/tmp/spdk_tgt.sock 00:04:41.590 11:27:44 json_config -- common/autotest_common.sh@835 -- # '[' -z 1492402 ']' 00:04:41.590 11:27:44 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.590 11:27:44 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.590 11:27:44 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:41.590 11:27:44 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.590 11:27:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.590 [2024-11-20 11:27:44.845274] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:04:41.590 [2024-11-20 11:27:44.845346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492402 ] 00:04:42.219 [2024-11-20 11:27:45.417970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.219 [2024-11-20 11:27:45.478898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.501 [2024-11-20 11:27:48.540440] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21eee00/0x2188020) succeed. 00:04:45.501 [2024-11-20 11:27:48.552108] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21f0ff0/0x221d0c0) succeed. 00:04:45.501 [2024-11-20 11:27:48.603672] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:04:45.760 11:27:49 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.760 11:27:49 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:45.760 11:27:49 json_config -- json_config/common.sh@26 -- # echo '' 00:04:45.760 00:04:45.760 11:27:49 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:45.760 11:27:49 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:45.760 INFO: Checking if target configuration is the same... 
00:04:45.760 11:27:49 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:45.760 11:27:49 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:45.760 11:27:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:45.760 + '[' 2 -ne 2 ']' 00:04:45.760 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:45.760 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:45.760 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:45.760 +++ basename /dev/fd/62 00:04:45.760 ++ mktemp /tmp/62.XXX 00:04:45.760 + tmp_file_1=/tmp/62.6Pv 00:04:45.760 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:45.760 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:45.760 + tmp_file_2=/tmp/spdk_tgt_config.json.mGy 00:04:45.760 + ret=0 00:04:45.760 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:46.018 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:46.018 + diff -u /tmp/62.6Pv /tmp/spdk_tgt_config.json.mGy 00:04:46.018 + echo 'INFO: JSON config files are the same' 00:04:46.018 INFO: JSON config files are the same 00:04:46.018 + rm /tmp/62.6Pv /tmp/spdk_tgt_config.json.mGy 00:04:46.018 + exit 0 00:04:46.018 11:27:49 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:46.018 11:27:49 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:46.018 INFO: changing configuration and checking if this can be detected... 
00:04:46.018 11:27:49 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:46.018 11:27:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:46.276 11:27:49 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.276 11:27:49 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:46.276 11:27:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:46.276 + '[' 2 -ne 2 ']' 00:04:46.276 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:46.276 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
00:04:46.276 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:46.276 +++ basename /dev/fd/62 00:04:46.276 ++ mktemp /tmp/62.XXX 00:04:46.276 + tmp_file_1=/tmp/62.dKf 00:04:46.276 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.276 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:46.276 + tmp_file_2=/tmp/spdk_tgt_config.json.vEy 00:04:46.276 + ret=0 00:04:46.276 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:46.533 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:46.791 + diff -u /tmp/62.dKf /tmp/spdk_tgt_config.json.vEy 00:04:46.791 + ret=1 00:04:46.791 + echo '=== Start of file: /tmp/62.dKf ===' 00:04:46.791 + cat /tmp/62.dKf 00:04:46.791 + echo '=== End of file: /tmp/62.dKf ===' 00:04:46.791 + echo '' 00:04:46.791 + echo '=== Start of file: /tmp/spdk_tgt_config.json.vEy ===' 00:04:46.791 + cat /tmp/spdk_tgt_config.json.vEy 00:04:46.791 + echo '=== End of file: /tmp/spdk_tgt_config.json.vEy ===' 00:04:46.791 + echo '' 00:04:46.791 + rm /tmp/62.dKf /tmp/spdk_tgt_config.json.vEy 00:04:46.791 + exit 1 00:04:46.791 11:27:50 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:46.791 INFO: configuration change detected. 
00:04:46.791 11:27:50 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:46.791 11:27:50 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:46.791 11:27:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.791 11:27:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.791 11:27:50 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:46.791 11:27:50 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:46.791 11:27:50 json_config -- json_config/json_config.sh@324 -- # [[ -n 1492402 ]] 00:04:46.791 11:27:50 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:46.791 11:27:50 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:46.791 11:27:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.791 11:27:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.791 11:27:50 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:46.791 11:27:50 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:46.791 11:27:50 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:46.791 11:27:50 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:46.791 11:27:50 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:46.791 11:27:50 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:46.791 11:27:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.791 11:27:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.791 11:27:50 json_config -- json_config/json_config.sh@330 -- # killprocess 1492402 00:04:46.791 11:27:50 json_config -- common/autotest_common.sh@954 -- # '[' -z 1492402 ']' 00:04:46.791 11:27:50 json_config -- common/autotest_common.sh@958 -- # kill -0 
1492402 00:04:46.791 11:27:50 json_config -- common/autotest_common.sh@959 -- # uname 00:04:46.791 11:27:50 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.791 11:27:50 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1492402 00:04:46.791 11:27:50 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.791 11:27:50 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.791 11:27:50 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1492402' 00:04:46.791 killing process with pid 1492402 00:04:46.791 11:27:50 json_config -- common/autotest_common.sh@973 -- # kill 1492402 00:04:46.791 11:27:50 json_config -- common/autotest_common.sh@978 -- # wait 1492402 00:04:50.969 11:27:54 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.969 11:27:54 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:50.969 11:27:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:50.969 11:27:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.969 11:27:54 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:50.969 11:27:54 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:50.969 INFO: Success 00:04:50.969 11:27:54 json_config -- json_config/json_config.sh@1 -- # nvmftestfini 00:04:50.969 11:27:54 json_config -- nvmf/common.sh@335 -- # nvmfcleanup 00:04:50.969 11:27:54 json_config -- nvmf/common.sh@99 -- # sync 00:04:50.969 11:27:54 json_config -- nvmf/common.sh@101 -- # '[' '' == tcp ']' 00:04:50.969 11:27:54 json_config -- nvmf/common.sh@101 -- # '[' '' == rdma ']' 00:04:50.969 11:27:54 json_config -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:04:50.969 11:27:54 json_config -- 
nvmf/common.sh@339 -- # '[' '' == iso ']' 00:04:50.969 11:27:54 json_config -- nvmf/common.sh@342 -- # nvmf_fini 00:04:50.969 11:27:54 json_config -- nvmf/setup.sh@264 -- # local dev 00:04:50.969 11:27:54 json_config -- nvmf/setup.sh@267 -- # remove_target_ns 00:04:50.969 11:27:54 json_config -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:04:50.969 11:27:54 json_config -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:04:50.969 11:27:54 json_config -- common/autotest_common.sh@22 -- # _remove_target_ns 00:04:50.969 11:27:54 json_config -- nvmf/setup.sh@268 -- # delete_main_bridge 00:04:50.969 11:27:54 json_config -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:04:50.969 11:27:54 json_config -- nvmf/setup.sh@130 -- # return 0 00:04:50.969 11:27:54 json_config -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:04:50.969 11:27:54 json_config -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:04:50.969 11:27:54 json_config -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:04:50.969 11:27:54 json_config -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:04:50.969 11:27:54 json_config -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:04:50.969 11:27:54 json_config -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:04:50.970 11:27:54 json_config -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:04:50.970 11:27:54 json_config -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:04:50.970 11:27:54 json_config -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:04:50.970 11:27:54 json_config -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:04:50.970 11:27:54 json_config -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:04:50.970 11:27:54 json_config -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:04:50.970 11:27:54 json_config -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:04:50.970 11:27:54 json_config -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:04:50.970 11:27:54 
json_config -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:04:50.970 11:27:54 json_config -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:04:50.970 11:27:54 json_config -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:04:50.970 11:27:54 json_config -- nvmf/setup.sh@41 -- # _dev=0 00:04:50.970 11:27:54 json_config -- nvmf/setup.sh@41 -- # dev_map=() 00:04:50.970 11:27:54 json_config -- nvmf/setup.sh@284 -- # iptr 00:04:50.970 11:27:54 json_config -- nvmf/common.sh@542 -- # iptables-save 00:04:50.970 11:27:54 json_config -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:04:50.970 11:27:54 json_config -- nvmf/common.sh@542 -- # iptables-restore 00:04:50.970 00:04:50.970 real 0m27.401s 00:04:50.970 user 0m29.529s 00:04:50.970 sys 0m8.074s 00:04:50.970 11:27:54 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.970 11:27:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.970 ************************************ 00:04:50.970 END TEST json_config 00:04:50.970 ************************************ 00:04:50.970 11:27:54 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:50.970 11:27:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.970 11:27:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.970 11:27:54 -- common/autotest_common.sh@10 -- # set +x 00:04:50.970 ************************************ 00:04:50.970 START TEST json_config_extra_key 00:04:50.970 ************************************ 00:04:50.970 11:27:54 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:50.970 11:27:54 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.970 11:27:54 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.970 11:27:54 
json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.970 11:27:54 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:50.970 11:27:54 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.970 11:27:54 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.970 --rc genhtml_branch_coverage=1 00:04:50.970 --rc genhtml_function_coverage=1 00:04:50.970 --rc genhtml_legend=1 00:04:50.970 --rc geninfo_all_blocks=1 00:04:50.970 --rc geninfo_unexecuted_blocks=1 00:04:50.970 00:04:50.970 ' 00:04:50.970 11:27:54 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.970 --rc genhtml_branch_coverage=1 00:04:50.970 --rc genhtml_function_coverage=1 00:04:50.970 --rc 
genhtml_legend=1 00:04:50.970 --rc geninfo_all_blocks=1 00:04:50.970 --rc geninfo_unexecuted_blocks=1 00:04:50.970 00:04:50.970 ' 00:04:50.970 11:27:54 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.970 --rc genhtml_branch_coverage=1 00:04:50.970 --rc genhtml_function_coverage=1 00:04:50.970 --rc genhtml_legend=1 00:04:50.970 --rc geninfo_all_blocks=1 00:04:50.970 --rc geninfo_unexecuted_blocks=1 00:04:50.970 00:04:50.970 ' 00:04:50.970 11:27:54 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.970 --rc genhtml_branch_coverage=1 00:04:50.970 --rc genhtml_function_coverage=1 00:04:50.970 --rc genhtml_legend=1 00:04:50.970 --rc geninfo_all_blocks=1 00:04:50.970 --rc geninfo_unexecuted_blocks=1 00:04:50.970 00:04:50.970 ' 00:04:50.970 11:27:54 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.970 11:27:54 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.970 11:27:54 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.970 11:27:54 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.970 11:27:54 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.970 11:27:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:50.970 11:27:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@51 
-- # export NVMF_APP_SHM_ID 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:04:50.970 11:27:54 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:04:50.971 11:27:54 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.971 11:27:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.971 11:27:54 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:04:50.971 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:04:50.971 11:27:54 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:04:50.971 11:27:54 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:04:50.971 11:27:54 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:04:50.971 11:27:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:50.971 11:27:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:50.971 11:27:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:50.971 11:27:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:50.971 11:27:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:50.971 11:27:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:50.971 11:27:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:50.971 11:27:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:50.971 
11:27:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:50.971 11:27:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:50.971 11:27:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:50.971 INFO: launching applications... 00:04:50.971 11:27:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:50.971 11:27:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:50.971 11:27:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:50.971 11:27:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:50.971 11:27:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:50.971 11:27:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:50.971 11:27:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.971 11:27:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.971 11:27:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1493877 00:04:50.971 11:27:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:50.971 Waiting for target to run... 
00:04:50.971 11:27:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1493877 /var/tmp/spdk_tgt.sock 00:04:50.971 11:27:54 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1493877 ']' 00:04:50.971 11:27:54 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.971 11:27:54 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.971 11:27:54 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:50.971 11:27:54 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.971 11:27:54 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:50.971 11:27:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:50.971 [2024-11-20 11:27:54.406649] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:04:50.971 [2024-11-20 11:27:54.406706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493877 ] 00:04:51.229 [2024-11-20 11:27:54.696471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.487 [2024-11-20 11:27:54.736260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.053 11:27:55 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.053 11:27:55 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:52.053 11:27:55 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:52.053 00:04:52.053 11:27:55 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:52.053 INFO: shutting down applications... 00:04:52.053 11:27:55 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:52.053 11:27:55 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:52.053 11:27:55 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:52.053 11:27:55 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1493877 ]] 00:04:52.053 11:27:55 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1493877 00:04:52.053 11:27:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:52.053 11:27:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.053 11:27:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1493877 00:04:52.053 11:27:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.312 11:27:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.312 11:27:55 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.312 11:27:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1493877 00:04:52.312 11:27:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:52.312 11:27:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:52.312 11:27:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:52.312 11:27:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:52.312 SPDK target shutdown done 00:04:52.312 11:27:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:52.312 Success 00:04:52.312 00:04:52.312 real 0m1.562s 00:04:52.312 user 0m1.346s 00:04:52.312 sys 0m0.447s 00:04:52.312 11:27:55 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.312 11:27:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:52.312 ************************************ 00:04:52.312 END TEST json_config_extra_key 00:04:52.312 ************************************ 00:04:52.312 11:27:55 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:52.312 11:27:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.312 11:27:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.312 11:27:55 -- common/autotest_common.sh@10 -- # set +x 00:04:52.571 ************************************ 00:04:52.571 START TEST alias_rpc 00:04:52.571 ************************************ 00:04:52.571 11:27:55 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:52.571 * Looking for test storage... 
00:04:52.571 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:04:52.571 11:27:55 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.571 11:27:55 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.571 11:27:55 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.571 11:27:55 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.571 11:27:55 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.571 11:27:55 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.571 11:27:55 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.571 11:27:55 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.571 11:27:55 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.571 11:27:55 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.571 11:27:55 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.571 11:27:55 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.571 11:27:55 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.571 11:27:55 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.571 11:27:55 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.571 11:27:55 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:52.571 11:27:55 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:52.571 11:27:55 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.571 11:27:55 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.571 11:27:56 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:52.571 11:27:56 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:52.571 11:27:56 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.571 11:27:56 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:52.571 11:27:56 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.571 11:27:56 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:52.571 11:27:56 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:52.571 11:27:56 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.571 11:27:56 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:52.571 11:27:56 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.571 11:27:56 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.571 11:27:56 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.571 11:27:56 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:52.571 11:27:56 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.571 11:27:56 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:52.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.571 --rc genhtml_branch_coverage=1 00:04:52.571 --rc genhtml_function_coverage=1 00:04:52.571 --rc genhtml_legend=1 00:04:52.571 --rc geninfo_all_blocks=1 00:04:52.571 --rc geninfo_unexecuted_blocks=1 00:04:52.571 00:04:52.571 ' 00:04:52.571 11:27:56 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.571 --rc genhtml_branch_coverage=1 00:04:52.571 --rc genhtml_function_coverage=1 00:04:52.571 --rc genhtml_legend=1 00:04:52.571 --rc geninfo_all_blocks=1 00:04:52.571 --rc geninfo_unexecuted_blocks=1 00:04:52.571 00:04:52.571 ' 00:04:52.571 11:27:56 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:04:52.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.571 --rc genhtml_branch_coverage=1 00:04:52.571 --rc genhtml_function_coverage=1 00:04:52.571 --rc genhtml_legend=1 00:04:52.571 --rc geninfo_all_blocks=1 00:04:52.571 --rc geninfo_unexecuted_blocks=1 00:04:52.571 00:04:52.571 ' 00:04:52.571 11:27:56 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.571 --rc genhtml_branch_coverage=1 00:04:52.571 --rc genhtml_function_coverage=1 00:04:52.571 --rc genhtml_legend=1 00:04:52.571 --rc geninfo_all_blocks=1 00:04:52.571 --rc geninfo_unexecuted_blocks=1 00:04:52.571 00:04:52.571 ' 00:04:52.571 11:27:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:52.571 11:27:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1494120 00:04:52.571 11:27:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1494120 00:04:52.571 11:27:56 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1494120 ']' 00:04:52.571 11:27:56 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.571 11:27:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.571 11:27:56 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.571 11:27:56 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.571 11:27:56 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.571 11:27:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.830 [2024-11-20 11:27:56.055437] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:04:52.830 [2024-11-20 11:27:56.055493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494120 ] 00:04:52.830 [2024-11-20 11:27:56.132449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.830 [2024-11-20 11:27:56.179945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.088 11:27:56 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.088 11:27:56 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:53.088 11:27:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:53.347 11:27:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1494120 00:04:53.347 11:27:56 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1494120 ']' 00:04:53.347 11:27:56 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1494120 00:04:53.347 11:27:56 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:53.347 11:27:56 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.347 11:27:56 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1494120 00:04:53.347 11:27:56 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.347 11:27:56 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.347 11:27:56 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1494120' 00:04:53.347 killing process with pid 1494120 00:04:53.347 11:27:56 alias_rpc -- common/autotest_common.sh@973 -- # kill 1494120 00:04:53.347 11:27:56 alias_rpc -- common/autotest_common.sh@978 -- # wait 1494120 00:04:53.606 00:04:53.606 real 0m1.212s 00:04:53.606 user 0m1.185s 00:04:53.606 sys 0m0.463s 00:04:53.606 11:27:57 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.606 11:27:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.606 ************************************ 00:04:53.606 END TEST alias_rpc 00:04:53.606 ************************************ 00:04:53.606 11:27:57 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:53.606 11:27:57 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:53.606 11:27:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.606 11:27:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.606 11:27:57 -- common/autotest_common.sh@10 -- # set +x 00:04:53.865 ************************************ 00:04:53.865 START TEST spdkcli_tcp 00:04:53.865 ************************************ 00:04:53.865 11:27:57 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:53.865 * Looking for test storage... 00:04:53.865 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:04:53.865 11:27:57 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.865 11:27:57 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.865 11:27:57 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.865 11:27:57 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 
00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.865 11:27:57 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:53.865 11:27:57 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.865 11:27:57 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.865 --rc genhtml_branch_coverage=1 
00:04:53.865 --rc genhtml_function_coverage=1 00:04:53.865 --rc genhtml_legend=1 00:04:53.865 --rc geninfo_all_blocks=1 00:04:53.865 --rc geninfo_unexecuted_blocks=1 00:04:53.865 00:04:53.865 ' 00:04:53.865 11:27:57 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.865 --rc genhtml_branch_coverage=1 00:04:53.865 --rc genhtml_function_coverage=1 00:04:53.865 --rc genhtml_legend=1 00:04:53.865 --rc geninfo_all_blocks=1 00:04:53.865 --rc geninfo_unexecuted_blocks=1 00:04:53.865 00:04:53.865 ' 00:04:53.865 11:27:57 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.865 --rc genhtml_branch_coverage=1 00:04:53.865 --rc genhtml_function_coverage=1 00:04:53.865 --rc genhtml_legend=1 00:04:53.865 --rc geninfo_all_blocks=1 00:04:53.865 --rc geninfo_unexecuted_blocks=1 00:04:53.865 00:04:53.865 ' 00:04:53.865 11:27:57 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.865 --rc genhtml_branch_coverage=1 00:04:53.865 --rc genhtml_function_coverage=1 00:04:53.865 --rc genhtml_legend=1 00:04:53.865 --rc geninfo_all_blocks=1 00:04:53.865 --rc geninfo_unexecuted_blocks=1 00:04:53.865 00:04:53.865 ' 00:04:53.865 11:27:57 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:04:53.865 11:27:57 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:53.865 11:27:57 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:04:53.865 11:27:57 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:53.865 11:27:57 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:53.865 11:27:57 
spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:53.865 11:27:57 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:53.865 11:27:57 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.865 11:27:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.865 11:27:57 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1494369 00:04:53.865 11:27:57 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1494369 00:04:53.865 11:27:57 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1494369 ']' 00:04:53.865 11:27:57 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.865 11:27:57 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.865 11:27:57 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.865 11:27:57 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.865 11:27:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.865 11:27:57 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:53.865 [2024-11-20 11:27:57.332630] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:04:53.865 [2024-11-20 11:27:57.332686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494369 ] 00:04:54.124 [2024-11-20 11:27:57.409368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.125 [2024-11-20 11:27:57.455212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.125 [2024-11-20 11:27:57.455214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.692 11:27:58 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.692 11:27:58 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:54.692 11:27:58 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1494426 00:04:54.692 11:27:58 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:54.692 11:27:58 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:54.950 [ 00:04:54.950 "bdev_malloc_delete", 00:04:54.950 "bdev_malloc_create", 00:04:54.950 "bdev_null_resize", 00:04:54.950 "bdev_null_delete", 00:04:54.950 "bdev_null_create", 00:04:54.950 "bdev_nvme_cuse_unregister", 00:04:54.950 "bdev_nvme_cuse_register", 00:04:54.950 "bdev_opal_new_user", 00:04:54.950 "bdev_opal_set_lock_state", 00:04:54.950 "bdev_opal_delete", 00:04:54.950 "bdev_opal_get_info", 00:04:54.950 "bdev_opal_create", 00:04:54.950 "bdev_nvme_opal_revert", 00:04:54.950 "bdev_nvme_opal_init", 00:04:54.950 "bdev_nvme_send_cmd", 00:04:54.950 "bdev_nvme_set_keys", 00:04:54.950 "bdev_nvme_get_path_iostat", 00:04:54.950 "bdev_nvme_get_mdns_discovery_info", 00:04:54.950 "bdev_nvme_stop_mdns_discovery", 00:04:54.950 "bdev_nvme_start_mdns_discovery", 00:04:54.950 "bdev_nvme_set_multipath_policy", 00:04:54.950 
"bdev_nvme_set_preferred_path", 00:04:54.950 "bdev_nvme_get_io_paths", 00:04:54.950 "bdev_nvme_remove_error_injection", 00:04:54.950 "bdev_nvme_add_error_injection", 00:04:54.950 "bdev_nvme_get_discovery_info", 00:04:54.950 "bdev_nvme_stop_discovery", 00:04:54.950 "bdev_nvme_start_discovery", 00:04:54.950 "bdev_nvme_get_controller_health_info", 00:04:54.950 "bdev_nvme_disable_controller", 00:04:54.950 "bdev_nvme_enable_controller", 00:04:54.950 "bdev_nvme_reset_controller", 00:04:54.950 "bdev_nvme_get_transport_statistics", 00:04:54.950 "bdev_nvme_apply_firmware", 00:04:54.950 "bdev_nvme_detach_controller", 00:04:54.950 "bdev_nvme_get_controllers", 00:04:54.950 "bdev_nvme_attach_controller", 00:04:54.950 "bdev_nvme_set_hotplug", 00:04:54.950 "bdev_nvme_set_options", 00:04:54.950 "bdev_passthru_delete", 00:04:54.950 "bdev_passthru_create", 00:04:54.950 "bdev_lvol_set_parent_bdev", 00:04:54.950 "bdev_lvol_set_parent", 00:04:54.950 "bdev_lvol_check_shallow_copy", 00:04:54.950 "bdev_lvol_start_shallow_copy", 00:04:54.950 "bdev_lvol_grow_lvstore", 00:04:54.950 "bdev_lvol_get_lvols", 00:04:54.950 "bdev_lvol_get_lvstores", 00:04:54.950 "bdev_lvol_delete", 00:04:54.950 "bdev_lvol_set_read_only", 00:04:54.950 "bdev_lvol_resize", 00:04:54.950 "bdev_lvol_decouple_parent", 00:04:54.950 "bdev_lvol_inflate", 00:04:54.950 "bdev_lvol_rename", 00:04:54.950 "bdev_lvol_clone_bdev", 00:04:54.950 "bdev_lvol_clone", 00:04:54.950 "bdev_lvol_snapshot", 00:04:54.950 "bdev_lvol_create", 00:04:54.950 "bdev_lvol_delete_lvstore", 00:04:54.950 "bdev_lvol_rename_lvstore", 00:04:54.950 "bdev_lvol_create_lvstore", 00:04:54.950 "bdev_raid_set_options", 00:04:54.950 "bdev_raid_remove_base_bdev", 00:04:54.950 "bdev_raid_add_base_bdev", 00:04:54.950 "bdev_raid_delete", 00:04:54.950 "bdev_raid_create", 00:04:54.950 "bdev_raid_get_bdevs", 00:04:54.950 "bdev_error_inject_error", 00:04:54.950 "bdev_error_delete", 00:04:54.950 "bdev_error_create", 00:04:54.950 "bdev_split_delete", 00:04:54.950 
"bdev_split_create", 00:04:54.950 "bdev_delay_delete", 00:04:54.950 "bdev_delay_create", 00:04:54.950 "bdev_delay_update_latency", 00:04:54.950 "bdev_zone_block_delete", 00:04:54.950 "bdev_zone_block_create", 00:04:54.950 "blobfs_create", 00:04:54.950 "blobfs_detect", 00:04:54.950 "blobfs_set_cache_size", 00:04:54.950 "bdev_aio_delete", 00:04:54.950 "bdev_aio_rescan", 00:04:54.950 "bdev_aio_create", 00:04:54.950 "bdev_ftl_set_property", 00:04:54.950 "bdev_ftl_get_properties", 00:04:54.950 "bdev_ftl_get_stats", 00:04:54.950 "bdev_ftl_unmap", 00:04:54.950 "bdev_ftl_unload", 00:04:54.950 "bdev_ftl_delete", 00:04:54.950 "bdev_ftl_load", 00:04:54.950 "bdev_ftl_create", 00:04:54.950 "bdev_virtio_attach_controller", 00:04:54.950 "bdev_virtio_scsi_get_devices", 00:04:54.950 "bdev_virtio_detach_controller", 00:04:54.950 "bdev_virtio_blk_set_hotplug", 00:04:54.950 "bdev_iscsi_delete", 00:04:54.950 "bdev_iscsi_create", 00:04:54.950 "bdev_iscsi_set_options", 00:04:54.950 "accel_error_inject_error", 00:04:54.950 "ioat_scan_accel_module", 00:04:54.950 "dsa_scan_accel_module", 00:04:54.950 "iaa_scan_accel_module", 00:04:54.950 "keyring_file_remove_key", 00:04:54.950 "keyring_file_add_key", 00:04:54.950 "keyring_linux_set_options", 00:04:54.950 "fsdev_aio_delete", 00:04:54.950 "fsdev_aio_create", 00:04:54.950 "iscsi_get_histogram", 00:04:54.950 "iscsi_enable_histogram", 00:04:54.950 "iscsi_set_options", 00:04:54.950 "iscsi_get_auth_groups", 00:04:54.950 "iscsi_auth_group_remove_secret", 00:04:54.950 "iscsi_auth_group_add_secret", 00:04:54.950 "iscsi_delete_auth_group", 00:04:54.950 "iscsi_create_auth_group", 00:04:54.950 "iscsi_set_discovery_auth", 00:04:54.950 "iscsi_get_options", 00:04:54.950 "iscsi_target_node_request_logout", 00:04:54.950 "iscsi_target_node_set_redirect", 00:04:54.950 "iscsi_target_node_set_auth", 00:04:54.950 "iscsi_target_node_add_lun", 00:04:54.950 "iscsi_get_stats", 00:04:54.950 "iscsi_get_connections", 00:04:54.950 "iscsi_portal_group_set_auth", 
00:04:54.950 "iscsi_start_portal_group", 00:04:54.950 "iscsi_delete_portal_group", 00:04:54.950 "iscsi_create_portal_group", 00:04:54.950 "iscsi_get_portal_groups", 00:04:54.950 "iscsi_delete_target_node", 00:04:54.950 "iscsi_target_node_remove_pg_ig_maps", 00:04:54.950 "iscsi_target_node_add_pg_ig_maps", 00:04:54.950 "iscsi_create_target_node", 00:04:54.950 "iscsi_get_target_nodes", 00:04:54.950 "iscsi_delete_initiator_group", 00:04:54.950 "iscsi_initiator_group_remove_initiators", 00:04:54.950 "iscsi_initiator_group_add_initiators", 00:04:54.950 "iscsi_create_initiator_group", 00:04:54.950 "iscsi_get_initiator_groups", 00:04:54.950 "nvmf_set_crdt", 00:04:54.950 "nvmf_set_config", 00:04:54.950 "nvmf_set_max_subsystems", 00:04:54.950 "nvmf_stop_mdns_prr", 00:04:54.950 "nvmf_publish_mdns_prr", 00:04:54.950 "nvmf_subsystem_get_listeners", 00:04:54.950 "nvmf_subsystem_get_qpairs", 00:04:54.950 "nvmf_subsystem_get_controllers", 00:04:54.950 "nvmf_get_stats", 00:04:54.950 "nvmf_get_transports", 00:04:54.950 "nvmf_create_transport", 00:04:54.950 "nvmf_get_targets", 00:04:54.950 "nvmf_delete_target", 00:04:54.950 "nvmf_create_target", 00:04:54.950 "nvmf_subsystem_allow_any_host", 00:04:54.950 "nvmf_subsystem_set_keys", 00:04:54.950 "nvmf_subsystem_remove_host", 00:04:54.951 "nvmf_subsystem_add_host", 00:04:54.951 "nvmf_ns_remove_host", 00:04:54.951 "nvmf_ns_add_host", 00:04:54.951 "nvmf_subsystem_remove_ns", 00:04:54.951 "nvmf_subsystem_set_ns_ana_group", 00:04:54.951 "nvmf_subsystem_add_ns", 00:04:54.951 "nvmf_subsystem_listener_set_ana_state", 00:04:54.951 "nvmf_discovery_get_referrals", 00:04:54.951 "nvmf_discovery_remove_referral", 00:04:54.951 "nvmf_discovery_add_referral", 00:04:54.951 "nvmf_subsystem_remove_listener", 00:04:54.951 "nvmf_subsystem_add_listener", 00:04:54.951 "nvmf_delete_subsystem", 00:04:54.951 "nvmf_create_subsystem", 00:04:54.951 "nvmf_get_subsystems", 00:04:54.951 "env_dpdk_get_mem_stats", 00:04:54.951 "nbd_get_disks", 00:04:54.951 
"nbd_stop_disk", 00:04:54.951 "nbd_start_disk", 00:04:54.951 "ublk_recover_disk", 00:04:54.951 "ublk_get_disks", 00:04:54.951 "ublk_stop_disk", 00:04:54.951 "ublk_start_disk", 00:04:54.951 "ublk_destroy_target", 00:04:54.951 "ublk_create_target", 00:04:54.951 "virtio_blk_create_transport", 00:04:54.951 "virtio_blk_get_transports", 00:04:54.951 "vhost_controller_set_coalescing", 00:04:54.951 "vhost_get_controllers", 00:04:54.951 "vhost_delete_controller", 00:04:54.951 "vhost_create_blk_controller", 00:04:54.951 "vhost_scsi_controller_remove_target", 00:04:54.951 "vhost_scsi_controller_add_target", 00:04:54.951 "vhost_start_scsi_controller", 00:04:54.951 "vhost_create_scsi_controller", 00:04:54.951 "thread_set_cpumask", 00:04:54.951 "scheduler_set_options", 00:04:54.951 "framework_get_governor", 00:04:54.951 "framework_get_scheduler", 00:04:54.951 "framework_set_scheduler", 00:04:54.951 "framework_get_reactors", 00:04:54.951 "thread_get_io_channels", 00:04:54.951 "thread_get_pollers", 00:04:54.951 "thread_get_stats", 00:04:54.951 "framework_monitor_context_switch", 00:04:54.951 "spdk_kill_instance", 00:04:54.951 "log_enable_timestamps", 00:04:54.951 "log_get_flags", 00:04:54.951 "log_clear_flag", 00:04:54.951 "log_set_flag", 00:04:54.951 "log_get_level", 00:04:54.951 "log_set_level", 00:04:54.951 "log_get_print_level", 00:04:54.951 "log_set_print_level", 00:04:54.951 "framework_enable_cpumask_locks", 00:04:54.951 "framework_disable_cpumask_locks", 00:04:54.951 "framework_wait_init", 00:04:54.951 "framework_start_init", 00:04:54.951 "scsi_get_devices", 00:04:54.951 "bdev_get_histogram", 00:04:54.951 "bdev_enable_histogram", 00:04:54.951 "bdev_set_qos_limit", 00:04:54.951 "bdev_set_qd_sampling_period", 00:04:54.951 "bdev_get_bdevs", 00:04:54.951 "bdev_reset_iostat", 00:04:54.951 "bdev_get_iostat", 00:04:54.951 "bdev_examine", 00:04:54.951 "bdev_wait_for_examine", 00:04:54.951 "bdev_set_options", 00:04:54.951 "accel_get_stats", 00:04:54.951 "accel_set_options", 
00:04:54.951 "accel_set_driver", 00:04:54.951 "accel_crypto_key_destroy", 00:04:54.951 "accel_crypto_keys_get", 00:04:54.951 "accel_crypto_key_create", 00:04:54.951 "accel_assign_opc", 00:04:54.951 "accel_get_module_info", 00:04:54.951 "accel_get_opc_assignments", 00:04:54.951 "vmd_rescan", 00:04:54.951 "vmd_remove_device", 00:04:54.951 "vmd_enable", 00:04:54.951 "sock_get_default_impl", 00:04:54.951 "sock_set_default_impl", 00:04:54.951 "sock_impl_set_options", 00:04:54.951 "sock_impl_get_options", 00:04:54.951 "iobuf_get_stats", 00:04:54.951 "iobuf_set_options", 00:04:54.951 "keyring_get_keys", 00:04:54.951 "framework_get_pci_devices", 00:04:54.951 "framework_get_config", 00:04:54.951 "framework_get_subsystems", 00:04:54.951 "fsdev_set_opts", 00:04:54.951 "fsdev_get_opts", 00:04:54.951 "trace_get_info", 00:04:54.951 "trace_get_tpoint_group_mask", 00:04:54.951 "trace_disable_tpoint_group", 00:04:54.951 "trace_enable_tpoint_group", 00:04:54.951 "trace_clear_tpoint_mask", 00:04:54.951 "trace_set_tpoint_mask", 00:04:54.951 "notify_get_notifications", 00:04:54.951 "notify_get_types", 00:04:54.951 "spdk_get_version", 00:04:54.951 "rpc_get_methods" 00:04:54.951 ] 00:04:54.951 11:27:58 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:54.951 11:27:58 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:54.951 11:27:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.951 11:27:58 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:54.951 11:27:58 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1494369 00:04:54.951 11:27:58 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1494369 ']' 00:04:54.951 11:27:58 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1494369 00:04:54.951 11:27:58 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:54.951 11:27:58 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.951 11:27:58 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1494369 00:04:55.209 11:27:58 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.209 11:27:58 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.209 11:27:58 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1494369' 00:04:55.209 killing process with pid 1494369 00:04:55.209 11:27:58 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1494369 00:04:55.209 11:27:58 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1494369 00:04:55.468 00:04:55.468 real 0m1.681s 00:04:55.468 user 0m3.090s 00:04:55.468 sys 0m0.535s 00:04:55.468 11:27:58 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.468 11:27:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.468 ************************************ 00:04:55.468 END TEST spdkcli_tcp 00:04:55.468 ************************************ 00:04:55.468 11:27:58 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:55.468 11:27:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.468 11:27:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.468 11:27:58 -- common/autotest_common.sh@10 -- # set +x 00:04:55.468 ************************************ 00:04:55.468 START TEST dpdk_mem_utility 00:04:55.468 ************************************ 00:04:55.468 11:27:58 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:55.727 * Looking for test storage... 
00:04:55.727 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:04:55.727 11:27:58 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:55.727 11:27:58 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:55.727 11:27:58 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:55.727 11:27:59 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:55.727 11:27:59 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.727 11:27:59 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.727 11:27:59 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.727 11:27:59 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.727 11:27:59 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.727 11:27:59 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.727 11:27:59 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.727 11:27:59 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.727 11:27:59 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.728 11:27:59 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:55.728 11:27:59 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.728 11:27:59 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:55.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.728 --rc genhtml_branch_coverage=1 00:04:55.728 --rc genhtml_function_coverage=1 00:04:55.728 --rc genhtml_legend=1 00:04:55.728 --rc geninfo_all_blocks=1 00:04:55.728 --rc geninfo_unexecuted_blocks=1 00:04:55.728 00:04:55.728 ' 00:04:55.728 11:27:59 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:55.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.728 --rc genhtml_branch_coverage=1 00:04:55.728 --rc genhtml_function_coverage=1 00:04:55.728 --rc genhtml_legend=1 00:04:55.728 --rc geninfo_all_blocks=1 00:04:55.728 --rc 
geninfo_unexecuted_blocks=1 00:04:55.728 00:04:55.728 ' 00:04:55.728 11:27:59 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:55.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.728 --rc genhtml_branch_coverage=1 00:04:55.728 --rc genhtml_function_coverage=1 00:04:55.728 --rc genhtml_legend=1 00:04:55.728 --rc geninfo_all_blocks=1 00:04:55.728 --rc geninfo_unexecuted_blocks=1 00:04:55.728 00:04:55.728 ' 00:04:55.728 11:27:59 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:55.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.728 --rc genhtml_branch_coverage=1 00:04:55.728 --rc genhtml_function_coverage=1 00:04:55.728 --rc genhtml_legend=1 00:04:55.728 --rc geninfo_all_blocks=1 00:04:55.728 --rc geninfo_unexecuted_blocks=1 00:04:55.728 00:04:55.728 ' 00:04:55.728 11:27:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:55.728 11:27:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1494630 00:04:55.728 11:27:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1494630 00:04:55.728 11:27:59 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1494630 ']' 00:04:55.728 11:27:59 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.728 11:27:59 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.728 11:27:59 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:55.728 11:27:59 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.728 11:27:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.728 11:27:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.728 [2024-11-20 11:27:59.116695] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:04:55.728 [2024-11-20 11:27:59.116750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494630 ] 00:04:55.728 [2024-11-20 11:27:59.194650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.986 [2024-11-20 11:27:59.242230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.553 11:27:59 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.553 11:27:59 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:56.553 11:27:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:56.553 11:27:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:56.553 11:27:59 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.553 11:27:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.553 { 00:04:56.553 "filename": "/tmp/spdk_mem_dump.txt" 00:04:56.553 } 00:04:56.553 11:27:59 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.553 11:27:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:56.553 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:56.553 1 heaps totaling size 
810.000000 MiB 00:04:56.553 size: 810.000000 MiB heap id: 0 00:04:56.553 end heaps---------- 00:04:56.553 9 mempools totaling size 595.772034 MiB 00:04:56.553 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:56.553 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:56.553 size: 92.545471 MiB name: bdev_io_1494630 00:04:56.553 size: 50.003479 MiB name: msgpool_1494630 00:04:56.553 size: 36.509338 MiB name: fsdev_io_1494630 00:04:56.553 size: 21.763794 MiB name: PDU_Pool 00:04:56.553 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:56.553 size: 4.133484 MiB name: evtpool_1494630 00:04:56.553 size: 0.026123 MiB name: Session_Pool 00:04:56.553 end mempools------- 00:04:56.553 6 memzones totaling size 4.142822 MiB 00:04:56.553 size: 1.000366 MiB name: RG_ring_0_1494630 00:04:56.553 size: 1.000366 MiB name: RG_ring_1_1494630 00:04:56.553 size: 1.000366 MiB name: RG_ring_4_1494630 00:04:56.553 size: 1.000366 MiB name: RG_ring_5_1494630 00:04:56.553 size: 0.125366 MiB name: RG_ring_2_1494630 00:04:56.553 size: 0.015991 MiB name: RG_ring_3_1494630 00:04:56.553 end memzones------- 00:04:56.553 11:27:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:56.811 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:56.811 list of free elements. 
size: 10.862488 MiB 00:04:56.811 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:56.811 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:56.811 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:56.812 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:56.812 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:56.812 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:56.812 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:56.812 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:56.812 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:56.812 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:56.812 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:56.812 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:56.812 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:56.812 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:56.812 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:56.812 list of standard malloc elements. 
size: 199.218628 MiB 00:04:56.812 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:56.812 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:56.812 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:56.812 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:56.812 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:56.812 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:56.812 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:56.812 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:56.812 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:56.812 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:56.812 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:56.812 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:56.812 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:56.812 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:56.812 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:56.812 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:56.812 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:56.812 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:56.812 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:56.812 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:56.812 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:56.812 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:56.812 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:56.812 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:56.812 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:56.812 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:56.812 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:56.812 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB
00:04:56.812 element at address: 0x200003efb980 with size: 0.000183 MiB
00:04:56.812 element at address: 0x2000064fdd80 with size: 0.000183 MiB
00:04:56.812 element at address: 0x20000a67da00 with size: 0.000183 MiB
00:04:56.812 element at address: 0x20000a67dac0 with size: 0.000183 MiB
00:04:56.812 element at address: 0x20000a6fdd80 with size: 0.000183 MiB
00:04:56.812 element at address: 0x200012cf44c0 with size: 0.000183 MiB
00:04:56.812 element at address: 0x200018eefc40 with size: 0.000183 MiB
00:04:56.812 element at address: 0x200018eefd00 with size: 0.000183 MiB
00:04:56.812 element at address: 0x2000190bc740 with size: 0.000183 MiB
00:04:56.812 element at address: 0x20001a695380 with size: 0.000183 MiB
00:04:56.812 element at address: 0x20001a695440 with size: 0.000183 MiB
00:04:56.812 element at address: 0x200027a68f80 with size: 0.000183 MiB
00:04:56.812 element at address: 0x200027a69040 with size: 0.000183 MiB
00:04:56.812 element at address: 0x200027a6fc40 with size: 0.000183 MiB
00:04:56.812 element at address: 0x200027a6fe40 with size: 0.000183 MiB
00:04:56.812 element at address: 0x200027a6ff00 with size: 0.000183 MiB
00:04:56.812 list of memzone associated elements. size: 599.918884 MiB
00:04:56.812 element at address: 0x20001a695500 with size: 211.416748 MiB
00:04:56.812 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:56.812 element at address: 0x200027a6ffc0 with size: 157.562561 MiB
00:04:56.812 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:56.812 element at address: 0x200012df4780 with size: 92.045044 MiB
00:04:56.812 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1494630_0
00:04:56.812 element at address: 0x200000dff380 with size: 48.003052 MiB
00:04:56.812 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1494630_0
00:04:56.812 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:04:56.812 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1494630_0
00:04:56.812 element at address: 0x2000191be940 with size: 20.255554 MiB
00:04:56.812 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:56.812 element at address: 0x2000319feb40 with size: 18.005066 MiB
00:04:56.812 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:56.812 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:04:56.812 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1494630_0
00:04:56.812 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:04:56.812 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1494630
00:04:56.812 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:04:56.812 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1494630
00:04:56.812 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:04:56.812 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:56.812 element at address: 0x2000190bc800 with size: 1.008118 MiB
00:04:56.812 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:56.812 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:04:56.812 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:56.812 element at address: 0x200003efba40 with size: 1.008118 MiB
00:04:56.812 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:56.812 element at address: 0x200000cff180 with size: 1.000488 MiB
00:04:56.812 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1494630
00:04:56.812 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:04:56.812 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1494630
00:04:56.812 element at address: 0x200012cf4580 with size: 1.000488 MiB
00:04:56.812 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1494630
00:04:56.812 element at address: 0x2000318fe940 with size: 1.000488 MiB
00:04:56.812 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1494630
00:04:56.812 element at address: 0x20000087f740 with size: 0.500488 MiB
00:04:56.812 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1494630
00:04:56.812 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:04:56.812 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1494630
00:04:56.812 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:04:56.812 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:56.812 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:04:56.812 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:56.812 element at address: 0x20001907c540 with size: 0.250488 MiB
00:04:56.812 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:56.812 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:04:56.812 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1494630
00:04:56.812 element at address: 0x20000085f3c0 with size: 0.125488 MiB
00:04:56.812 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1494630
00:04:56.812 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:04:56.812 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:56.812 element at address: 0x200027a69100 with size: 0.023743 MiB
00:04:56.812 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:56.812 element at address: 0x20000085b100 with size: 0.016113 MiB
00:04:56.812 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1494630
00:04:56.812 element at address: 0x200027a6f240 with size: 0.002441 MiB
00:04:56.812 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:56.812 element at address: 0x2000004ffc40 with size: 0.000305 MiB
00:04:56.812 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1494630
00:04:56.812 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:04:56.812 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1494630
00:04:56.812 element at address: 0x20000085af00 with size: 0.000305 MiB
00:04:56.812 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1494630
00:04:56.812 element at address: 0x200027a6fd00 with size: 0.000305 MiB
00:04:56.812 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:56.812 11:28:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:56.812 11:28:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1494630
00:04:56.812 11:28:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1494630 ']'
00:04:56.812 11:28:00 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1494630
00:04:56.812 11:28:00 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:04:56.812 11:28:00 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:56.812 11:28:00 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1494630
00:04:56.812 11:28:00 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:56.812 11:28:00 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:56.812 11:28:00 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1494630'
00:04:56.812 killing process with pid 1494630
00:04:56.812 11:28:00 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1494630
00:04:56.812 11:28:00 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1494630
00:04:57.070
00:04:57.070 real 0m1.582s
00:04:57.070 user 0m1.591s
00:04:57.070 sys 0m0.521s
00:04:57.070 11:28:00 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:57.070 11:28:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:57.070 ************************************
00:04:57.070 END TEST dpdk_mem_utility
00:04:57.070 ************************************
00:04:57.070 11:28:00 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh
00:04:57.071 11:28:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:57.071 11:28:00 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:57.071 11:28:00 -- common/autotest_common.sh@10 -- # set +x
00:04:57.071 ************************************
00:04:57.071 START TEST event
00:04:57.071 ************************************
00:04:57.071 11:28:00 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh
00:04:57.329 * Looking for test storage...
00:04:57.329 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event
00:04:57.329 11:28:00 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:57.329 11:28:00 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:57.329 11:28:00 event -- common/autotest_common.sh@1693 -- # lcov --version
00:04:57.329 11:28:00 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:57.329 11:28:00 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:57.329 11:28:00 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:57.329 11:28:00 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:57.329 11:28:00 event -- scripts/common.sh@336 -- # IFS=.-:
00:04:57.329 11:28:00 event -- scripts/common.sh@336 -- # read -ra ver1
00:04:57.329 11:28:00 event -- scripts/common.sh@337 -- # IFS=.-:
00:04:57.329 11:28:00 event -- scripts/common.sh@337 -- # read -ra ver2
00:04:57.329 11:28:00 event -- scripts/common.sh@338 -- # local 'op=<'
00:04:57.329 11:28:00 event -- scripts/common.sh@340 -- # ver1_l=2
00:04:57.329 11:28:00 event -- scripts/common.sh@341 -- # ver2_l=1
00:04:57.329 11:28:00 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:57.329 11:28:00 event -- scripts/common.sh@344 -- # case "$op" in
00:04:57.329 11:28:00 event -- scripts/common.sh@345 -- # : 1
00:04:57.329 11:28:00 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:57.329 11:28:00 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:57.329 11:28:00 event -- scripts/common.sh@365 -- # decimal 1
00:04:57.329 11:28:00 event -- scripts/common.sh@353 -- # local d=1
00:04:57.329 11:28:00 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:57.329 11:28:00 event -- scripts/common.sh@355 -- # echo 1
00:04:57.329 11:28:00 event -- scripts/common.sh@365 -- # ver1[v]=1
00:04:57.329 11:28:00 event -- scripts/common.sh@366 -- # decimal 2
00:04:57.329 11:28:00 event -- scripts/common.sh@353 -- # local d=2
00:04:57.329 11:28:00 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:57.329 11:28:00 event -- scripts/common.sh@355 -- # echo 2
00:04:57.329 11:28:00 event -- scripts/common.sh@366 -- # ver2[v]=2
00:04:57.329 11:28:00 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:57.329 11:28:00 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:57.329 11:28:00 event -- scripts/common.sh@368 -- # return 0
00:04:57.329 11:28:00 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:57.329 11:28:00 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:57.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.329 --rc genhtml_branch_coverage=1
00:04:57.329 --rc genhtml_function_coverage=1
00:04:57.329 --rc genhtml_legend=1
00:04:57.329 --rc geninfo_all_blocks=1
00:04:57.329 --rc geninfo_unexecuted_blocks=1
00:04:57.329
00:04:57.329 '
00:04:57.329 11:28:00 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:57.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.329 --rc genhtml_branch_coverage=1
00:04:57.329 --rc genhtml_function_coverage=1
00:04:57.329 --rc genhtml_legend=1
00:04:57.329 --rc geninfo_all_blocks=1
00:04:57.329 --rc geninfo_unexecuted_blocks=1
00:04:57.329
00:04:57.329 '
00:04:57.329 11:28:00 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:57.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.329 --rc genhtml_branch_coverage=1
00:04:57.329 --rc genhtml_function_coverage=1
00:04:57.329 --rc genhtml_legend=1
00:04:57.329 --rc geninfo_all_blocks=1
00:04:57.329 --rc geninfo_unexecuted_blocks=1
00:04:57.329
00:04:57.329 '
00:04:57.329 11:28:00 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:57.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.329 --rc genhtml_branch_coverage=1
00:04:57.329 --rc genhtml_function_coverage=1
00:04:57.329 --rc genhtml_legend=1
00:04:57.329 --rc geninfo_all_blocks=1
00:04:57.329 --rc geninfo_unexecuted_blocks=1
00:04:57.329
00:04:57.329 '
00:04:57.329 11:28:00 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh
00:04:57.329 11:28:00 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:57.329 11:28:00 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:57.329 11:28:00 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:04:57.329 11:28:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:57.329 11:28:00 event -- common/autotest_common.sh@10 -- # set +x
00:04:57.329 ************************************
00:04:57.329 START TEST event_perf
00:04:57.329 ************************************
00:04:57.329 11:28:00 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:57.329 Running I/O for 1 seconds...[2024-11-20 11:28:00.790952] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:04:57.329 [2024-11-20 11:28:00.791043] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494887 ]
00:04:57.587 [2024-11-20 11:28:00.872317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:57.587 [2024-11-20 11:28:00.922759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:57.587 [2024-11-20 11:28:00.922844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:57.587 [2024-11-20 11:28:00.922921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:57.587 [2024-11-20 11:28:00.922923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:58.518 Running I/O for 1 seconds...
00:04:58.518 lcore 0: 206431
00:04:58.518 lcore 1: 206431
00:04:58.518 lcore 2: 206430
00:04:58.518 lcore 3: 206430
00:04:58.518 done.
00:04:58.518
00:04:58.518 real 0m1.203s
00:04:58.518 user 0m4.106s
00:04:58.518 sys 0m0.093s
00:04:58.518 11:28:01 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:58.518 11:28:01 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:58.518 ************************************
00:04:58.518 END TEST event_perf
00:04:58.518 ************************************
00:04:58.776 11:28:02 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:58.776 11:28:02 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:04:58.776 11:28:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:58.776 11:28:02 event -- common/autotest_common.sh@10 -- # set +x
00:04:58.776 ************************************
00:04:58.776 START TEST event_reactor
00:04:58.776 ************************************
00:04:58.776 11:28:02 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:58.776 [2024-11-20 11:28:02.068925] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:04:58.776 [2024-11-20 11:28:02.068994] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495091 ]
00:04:58.776 [2024-11-20 11:28:02.150253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:58.776 [2024-11-20 11:28:02.196358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:00.146 test_start
00:05:00.146 oneshot
00:05:00.146 tick 100
00:05:00.146 tick 100
00:05:00.146 tick 250
00:05:00.146 tick 100
00:05:00.146 tick 100
00:05:00.146 tick 100
00:05:00.146 tick 250
00:05:00.146 tick 500
00:05:00.146 tick 100
00:05:00.146 tick 100
00:05:00.146 tick 250
00:05:00.146 tick 100
00:05:00.146 tick 100
00:05:00.146 test_end
00:05:00.146
00:05:00.146 real 0m1.191s
00:05:00.146 user 0m1.100s
00:05:00.146 sys 0m0.086s
00:05:00.146 11:28:03 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:00.146 11:28:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:00.146 ************************************
00:05:00.146 END TEST event_reactor
00:05:00.146 ************************************
00:05:00.146 11:28:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:00.146 11:28:03 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:00.146 11:28:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:00.146 11:28:03 event -- common/autotest_common.sh@10 -- # set +x
00:05:00.146 ************************************
00:05:00.146 START TEST event_reactor_perf
00:05:00.146 ************************************
00:05:00.146 11:28:03 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:00.146 [2024-11-20 11:28:03.332645] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:05:00.146 [2024-11-20 11:28:03.332724] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495294 ]
00:05:00.146 [2024-11-20 11:28:03.412070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:00.146 [2024-11-20 11:28:03.457470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:01.073 test_start
00:05:01.073 test_end
00:05:01.073 Performance: 513893 events per second
00:05:01.073
00:05:01.073 real 0m1.193s
00:05:01.073 user 0m1.108s
00:05:01.073 sys 0m0.080s
00:05:01.073 11:28:04 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:01.073 11:28:04 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:01.073 ************************************
00:05:01.073 END TEST event_reactor_perf
00:05:01.073 ************************************
00:05:01.073 11:28:04 event -- event/event.sh@49 -- # uname -s
00:05:01.073 11:28:04 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:01.073 11:28:04 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:01.073 11:28:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:01.073 11:28:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:01.073 11:28:04 event -- common/autotest_common.sh@10 -- # set +x
00:05:01.329 ************************************
00:05:01.329 START TEST event_scheduler
00:05:01.329 ************************************
00:05:01.329 11:28:04 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:01.329 * Looking for test storage... 00:05:01.329 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:01.329 11:28:04 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.329 11:28:04 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.329 11:28:04 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.329 11:28:04 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.329 11:28:04 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.329 11:28:04 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.329 11:28:04 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.329 11:28:04 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.329 11:28:04 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.329 11:28:04 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.329 11:28:04 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.329 11:28:04 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.330 11:28:04 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:01.330 11:28:04 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.330 11:28:04 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.330 --rc genhtml_branch_coverage=1 00:05:01.330 --rc genhtml_function_coverage=1 00:05:01.330 --rc genhtml_legend=1 00:05:01.330 --rc geninfo_all_blocks=1 00:05:01.330 --rc geninfo_unexecuted_blocks=1 00:05:01.330 00:05:01.330 ' 00:05:01.330 11:28:04 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.330 --rc genhtml_branch_coverage=1 00:05:01.330 --rc genhtml_function_coverage=1 00:05:01.330 --rc 
genhtml_legend=1 00:05:01.330 --rc geninfo_all_blocks=1 00:05:01.330 --rc geninfo_unexecuted_blocks=1 00:05:01.330 00:05:01.330 ' 00:05:01.330 11:28:04 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.330 --rc genhtml_branch_coverage=1 00:05:01.330 --rc genhtml_function_coverage=1 00:05:01.330 --rc genhtml_legend=1 00:05:01.330 --rc geninfo_all_blocks=1 00:05:01.330 --rc geninfo_unexecuted_blocks=1 00:05:01.330 00:05:01.330 ' 00:05:01.330 11:28:04 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.330 --rc genhtml_branch_coverage=1 00:05:01.330 --rc genhtml_function_coverage=1 00:05:01.330 --rc genhtml_legend=1 00:05:01.330 --rc geninfo_all_blocks=1 00:05:01.330 --rc geninfo_unexecuted_blocks=1 00:05:01.330 00:05:01.330 ' 00:05:01.330 11:28:04 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:01.330 11:28:04 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1495527 00:05:01.330 11:28:04 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.330 11:28:04 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:01.330 11:28:04 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1495527 00:05:01.330 11:28:04 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1495527 ']' 00:05:01.330 11:28:04 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.330 11:28:04 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.330 11:28:04 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.330 11:28:04 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.330 11:28:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.587 [2024-11-20 11:28:04.825503] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:01.587 [2024-11-20 11:28:04.825566] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495527 ] 00:05:01.587 [2024-11-20 11:28:04.897533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:01.587 [2024-11-20 11:28:04.944498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.587 [2024-11-20 11:28:04.944523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.587 [2024-11-20 11:28:04.944600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.587 [2024-11-20 11:28:04.944602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.587 11:28:04 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.587 11:28:04 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:01.587 11:28:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:01.587 11:28:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.587 11:28:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.587 [2024-11-20 11:28:04.997249] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:01.587 [2024-11-20 11:28:04.997270] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:01.587 [2024-11-20 11:28:04.997282] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:01.587 [2024-11-20 11:28:04.997290] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:01.587 [2024-11-20 11:28:04.997297] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:01.587 11:28:05 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.587 11:28:05 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:01.587 11:28:05 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.587 11:28:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.844 [2024-11-20 11:28:05.074949] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:01.844 11:28:05 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.844 11:28:05 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:01.844 11:28:05 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.844 11:28:05 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.844 11:28:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.844 ************************************ 00:05:01.844 START TEST scheduler_create_thread 00:05:01.844 ************************************ 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.844 2 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.844 3 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.844 4 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.844 5 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.844 11:28:05 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.844 6 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.844 7 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.844 8 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.844 11:28:05 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.844 9 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.844 10 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.844 11:28:05 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.844 11:28:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.213 11:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.213 11:28:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:03.213 11:28:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:03.213 11:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.213 11:28:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.583 11:28:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.583 00:05:04.583 real 0m2.622s 00:05:04.583 user 0m0.022s 00:05:04.583 sys 0m0.009s 00:05:04.583 11:28:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.583 11:28:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.583 ************************************ 00:05:04.583 END TEST scheduler_create_thread 00:05:04.583 ************************************ 00:05:04.583 11:28:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:04.583 11:28:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1495527 00:05:04.583 11:28:07 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1495527 ']' 00:05:04.583 11:28:07 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1495527 00:05:04.583 11:28:07 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:04.583 11:28:07 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.583 11:28:07 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1495527 00:05:04.583 11:28:07 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:04.583 11:28:07 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:04.583 11:28:07 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1495527' 00:05:04.583 killing process with pid 1495527 00:05:04.583 11:28:07 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1495527 00:05:04.583 11:28:07 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1495527 00:05:04.841 [2024-11-20 11:28:08.221489] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
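The `killprocess` helper traced above checks that the pid is still alive with `kill -0`, inspects the process name with `ps` to make sure it is not about to kill `sudo`, then kills and waits. A minimal self-contained sketch of that logic, using a background `sleep` in place of the SPDK reactor process (the pid and name here are whatever the shell assigns, not the 1495527 from the log):

```shell
# stand-in for the scheduler app: a background process we own
sleep 30 &
pid=$!

# killprocess logic: only signal if the pid is alive and not "sudo"
if kill -0 "$pid" 2>/dev/null; then
    name=$(ps --no-headers -o comm= "$pid")   # process name, no header line
    [ "$name" != "sudo" ] && kill "$pid"      # send SIGTERM
fi
wait "$pid" 2>/dev/null                       # reap it so kill -0 now fails
kill -0 "$pid" 2>/dev/null || echo "process $pid stopped"
```

After `wait` returns the child has been reaped, so the final `kill -0` fails and the helper can report the process as gone.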
00:05:05.098 00:05:05.098 real 0m3.831s 00:05:05.098 user 0m5.702s 00:05:05.098 sys 0m0.439s 00:05:05.098 11:28:08 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.098 11:28:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.098 ************************************ 00:05:05.098 END TEST event_scheduler 00:05:05.098 ************************************ 00:05:05.098 11:28:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:05.099 11:28:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:05.099 11:28:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.099 11:28:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.099 11:28:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.099 ************************************ 00:05:05.099 START TEST app_repeat 00:05:05.099 ************************************ 00:05:05.099 11:28:08 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:05.099 11:28:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.099 11:28:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.099 11:28:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:05.099 11:28:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.099 11:28:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:05.099 11:28:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:05.099 11:28:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:05.099 11:28:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1496126 00:05:05.099 11:28:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.099 11:28:08 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:05.099 11:28:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1496126' 00:05:05.099 Process app_repeat pid: 1496126 00:05:05.099 11:28:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.099 11:28:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:05.099 spdk_app_start Round 0 00:05:05.099 11:28:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1496126 /var/tmp/spdk-nbd.sock 00:05:05.099 11:28:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1496126 ']' 00:05:05.099 11:28:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.099 11:28:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.099 11:28:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:05.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.099 11:28:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.099 11:28:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.099 [2024-11-20 11:28:08.545423] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:05:05.099 [2024-11-20 11:28:08.545490] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496126 ] 00:05:05.356 [2024-11-20 11:28:08.624314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.356 [2024-11-20 11:28:08.677053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.356 [2024-11-20 11:28:08.677056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.356 11:28:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.356 11:28:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:05.356 11:28:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.612 Malloc0 00:05:05.612 11:28:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.872 Malloc1 00:05:05.872 11:28:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.872 11:28:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.872 11:28:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.872 11:28:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.872 11:28:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.872 11:28:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.872 11:28:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.872 11:28:09 
event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.872 11:28:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.872 11:28:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.872 11:28:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.872 11:28:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.872 11:28:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:05.872 11:28:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.872 11:28:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.872 11:28:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:06.133 /dev/nbd0 00:05:06.133 11:28:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.133 11:28:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:06.133 11:28:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:06.133 11:28:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:06.133 11:28:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:06.133 11:28:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:06.133 11:28:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:06.133 11:28:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:06.133 11:28:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:06.133 11:28:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:06.133 11:28:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:05:06.133 1+0 records in 00:05:06.133 1+0 records out 00:05:06.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257418 s, 15.9 MB/s 00:05:06.133 11:28:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:06.133 11:28:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:06.133 11:28:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:06.133 11:28:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:06.133 11:28:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:06.133 11:28:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.133 11:28:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.133 11:28:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.390 /dev/nbd1 00:05:06.390 11:28:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.390 11:28:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.390 11:28:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:06.390 11:28:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:06.390 11:28:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:06.390 11:28:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:06.390 11:28:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:06.390 11:28:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:06.390 11:28:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:06.390 11:28:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 
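The `waitfornbd` trace above is a bounded retry loop: up to 20 iterations of `grep -q -w nbd0 /proc/partitions` until the device registers. A runnable sketch of the same pattern, polling a temp file that stands in for `/proc/partitions` (hypothetical, so no real nbd device is needed):

```shell
listing=$(mktemp)
( sleep 0.5; echo nbd0 >> "$listing" ) &   # device "registers" a moment later
i=1
while [ "$i" -le 20 ]; do
    grep -q -w nbd0 "$listing" && break    # same whole-word check the helper uses
    sleep 0.2
    i=$((i + 1))
done
wait
grep -q -w nbd0 "$listing" && echo "nbd0 appeared after $i polls"
```

The `-w` flag matters: it keeps `nbd0` from matching `nbd0p1`-style partition entries.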
00:05:06.390 11:28:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.390 1+0 records in 00:05:06.390 1+0 records out 00:05:06.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027038 s, 15.1 MB/s 00:05:06.390 11:28:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:06.390 11:28:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:06.390 11:28:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:06.390 11:28:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:06.390 11:28:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:06.390 11:28:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.390 11:28:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.390 11:28:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.390 11:28:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.390 11:28:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.648 { 00:05:06.648 "nbd_device": "/dev/nbd0", 00:05:06.648 "bdev_name": "Malloc0" 00:05:06.648 }, 00:05:06.648 { 00:05:06.648 "nbd_device": "/dev/nbd1", 00:05:06.648 "bdev_name": "Malloc1" 00:05:06.648 } 00:05:06.648 ]' 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.648 { 00:05:06.648 "nbd_device": "/dev/nbd0", 00:05:06.648 "bdev_name": "Malloc0" 00:05:06.648 }, 00:05:06.648 { 00:05:06.648 "nbd_device": "/dev/nbd1", 
00:05:06.648 "bdev_name": "Malloc1" 00:05:06.648 } 00:05:06.648 ]' 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.648 /dev/nbd1' 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.648 /dev/nbd1' 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.648 256+0 records in 00:05:06.648 256+0 records out 00:05:06.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104815 s, 100 MB/s 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 
bs=4096 count=256 oflag=direct 00:05:06.648 256+0 records in 00:05:06.648 256+0 records out 00:05:06.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196833 s, 53.3 MB/s 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.648 256+0 records in 00:05:06.648 256+0 records out 00:05:06.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174463 s, 60.1 MB/s 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.648 11:28:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.648 11:28:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.648 11:28:10 
event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.648 11:28:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.648 11:28:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.648 11:28:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.648 11:28:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.648 11:28:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.648 11:28:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.905 11:28:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.905 11:28:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.905 11:28:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.905 11:28:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.905 11:28:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.905 11:28:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.905 11:28:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.905 11:28:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.905 11:28:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.905 11:28:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.162 11:28:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.162 11:28:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.162 11:28:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.162 
11:28:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.162 11:28:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.162 11:28:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.162 11:28:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.163 11:28:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.163 11:28:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.163 11:28:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.163 11:28:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.420 11:28:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.420 11:28:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.420 11:28:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.420 11:28:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.420 11:28:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.420 11:28:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.420 11:28:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.420 11:28:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.420 11:28:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.420 11:28:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.420 11:28:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.420 11:28:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.420 11:28:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.678 11:28:10 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:05:07.678 [2024-11-20 11:28:11.081071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.678 [2024-11-20 11:28:11.125392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.678 [2024-11-20 11:28:11.125394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.935 [2024-11-20 11:28:11.173325] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.935 [2024-11-20 11:28:11.173368] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:10.462 11:28:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:10.462 11:28:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:10.462 spdk_app_start Round 1 00:05:10.462 11:28:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1496126 /var/tmp/spdk-nbd.sock 00:05:10.462 11:28:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1496126 ']' 00:05:10.462 11:28:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.462 11:28:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.462 11:28:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:10.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:10.462 11:28:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.462 11:28:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.720 11:28:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.720 11:28:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:10.720 11:28:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.979 Malloc0 00:05:10.979 11:28:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.238 Malloc1 00:05:11.238 11:28:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.238 11:28:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.238 11:28:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.238 11:28:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.238 11:28:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.238 11:28:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.238 11:28:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.238 11:28:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.238 11:28:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.238 11:28:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.238 11:28:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.238 11:28:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:05:11.238 11:28:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.238 11:28:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.238 11:28:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.238 11:28:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.498 /dev/nbd0 00:05:11.498 11:28:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.498 11:28:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.498 11:28:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:11.498 11:28:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:11.498 11:28:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:11.498 11:28:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:11.498 11:28:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:11.498 11:28:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:11.498 11:28:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:11.498 11:28:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:11.498 11:28:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.498 1+0 records in 00:05:11.498 1+0 records out 00:05:11.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417262 s, 9.8 MB/s 00:05:11.498 11:28:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:11.498 11:28:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:11.498 11:28:14 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:11.498 11:28:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:11.498 11:28:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:11.498 11:28:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.498 11:28:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.498 11:28:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.498 /dev/nbd1 00:05:11.756 11:28:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.756 11:28:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.756 11:28:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:11.756 11:28:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:11.756 11:28:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:11.756 11:28:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:11.756 11:28:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:11.756 11:28:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:11.756 11:28:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:11.757 11:28:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:11.757 11:28:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.757 1+0 records in 00:05:11.757 1+0 records out 00:05:11.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201256 s, 20.4 MB/s 00:05:11.757 11:28:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:11.757 11:28:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:11.757 11:28:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:11.757 11:28:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:11.757 11:28:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:11.757 11:28:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.757 11:28:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.757 11:28:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.757 11:28:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.757 11:28:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.757 11:28:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:11.757 { 00:05:11.757 "nbd_device": "/dev/nbd0", 00:05:11.757 "bdev_name": "Malloc0" 00:05:11.757 }, 00:05:11.757 { 00:05:11.757 "nbd_device": "/dev/nbd1", 00:05:11.757 "bdev_name": "Malloc1" 00:05:11.757 } 00:05:11.757 ]' 00:05:11.757 11:28:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.757 { 00:05:11.757 "nbd_device": "/dev/nbd0", 00:05:11.757 "bdev_name": "Malloc0" 00:05:11.757 }, 00:05:11.757 { 00:05:11.757 "nbd_device": "/dev/nbd1", 00:05:11.757 "bdev_name": "Malloc1" 00:05:11.757 } 00:05:11.757 ]' 00:05:11.757 11:28:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.757 11:28:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.757 /dev/nbd1' 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.016 /dev/nbd1' 00:05:12.016 11:28:15 
event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.016 256+0 records in 00:05:12.016 256+0 records out 00:05:12.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107633 s, 97.4 MB/s 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.016 256+0 records in 00:05:12.016 256+0 records out 00:05:12.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193211 s, 54.3 MB/s 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 
of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.016 256+0 records in 00:05:12.016 256+0 records out 00:05:12.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207467 s, 50.5 MB/s 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.016 11:28:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.274 11:28:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.274 11:28:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.274 11:28:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.274 11:28:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.274 11:28:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.274 11:28:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.274 11:28:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.274 11:28:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.274 11:28:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.274 11:28:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.533 11:28:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.533 11:28:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.533 11:28:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.533 11:28:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.533 11:28:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.533 11:28:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.533 11:28:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.533 11:28:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
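The data-verify pass traced above writes 256 x 4 KiB of random data to a temp file, dd's it onto each NBD device, then cmp's each device back against the source. A minimal standalone sketch of the same write-then-verify loop; the paths are hypothetical stand-ins using plain files instead of /dev/nbd0 and /dev/nbd1 (so no oflag=direct), not the test's real devices:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-ins for the real block devices; any writable paths work here.
workdir=$(mktemp -d)
src="$workdir/nbdrandtest"
targets=("$workdir/nbd0" "$workdir/nbd1")

# Write phase: 1 MiB of random data, then copy it onto each target.
dd if=/dev/urandom of="$src" bs=4096 count=256 status=none
for t in "${targets[@]}"; do
    dd if="$src" of="$t" bs=4096 count=256 status=none
done

# Verify phase: byte-compare the first 1 MiB of each target with the source.
# cmp exits non-zero on the first mismatch, which aborts under set -e.
for t in "${targets[@]}"; do
    cmp -b -n 1M "$src" "$t"
done

rm -rf "$workdir"
echo OK
```

With real NBD devices the write dd would keep oflag=direct, as in the trace, to bypass the page cache.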
00:05:12.533 11:28:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.533 11:28:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.533 11:28:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.533 11:28:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.533 11:28:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.533 11:28:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.533 11:28:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.533 11:28:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.533 11:28:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.533 11:28:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:12.791 11:28:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.791 11:28:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.791 11:28:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.791 11:28:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.791 11:28:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.791 11:28:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.791 11:28:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.050 [2024-11-20 11:28:16.396842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.050 [2024-11-20 11:28:16.441480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.050 [2024-11-20 11:28:16.441482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.050 [2024-11-20 11:28:16.490094] notify.c: 
45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.050 [2024-11-20 11:28:16.490140] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:16.332 11:28:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:16.332 11:28:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:16.332 spdk_app_start Round 2 00:05:16.332 11:28:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1496126 /var/tmp/spdk-nbd.sock 00:05:16.332 11:28:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1496126 ']' 00:05:16.332 11:28:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.332 11:28:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.332 11:28:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:16.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:16.332 11:28:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.332 11:28:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.332 11:28:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.332 11:28:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:16.332 11:28:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.332 Malloc0 00:05:16.332 11:28:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.591 Malloc1 00:05:16.591 11:28:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.591 11:28:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.591 11:28:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.591 11:28:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:16.591 11:28:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.591 11:28:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:16.591 11:28:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.591 11:28:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.591 11:28:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.591 11:28:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:16.591 11:28:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.591 11:28:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:05:16.591 11:28:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:16.591 11:28:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:16.591 11:28:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.591 11:28:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:16.591 /dev/nbd0 00:05:16.591 11:28:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:16.591 11:28:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:16.591 11:28:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:16.591 11:28:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:16.591 11:28:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:16.591 11:28:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:16.591 11:28:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:16.591 11:28:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:16.591 11:28:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:16.591 11:28:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:16.591 11:28:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.849 1+0 records in 00:05:16.849 1+0 records out 00:05:16.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225159 s, 18.2 MB/s 00:05:16.849 11:28:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:16.849 11:28:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:16.849 11:28:20 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:16.849 11:28:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:16.849 11:28:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:16.849 11:28:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.849 11:28:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.849 11:28:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:16.849 /dev/nbd1 00:05:16.849 11:28:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.107 11:28:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.107 11:28:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:17.107 11:28:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:17.107 11:28:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:17.107 11:28:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:17.107 11:28:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:17.107 11:28:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:17.107 11:28:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:17.107 11:28:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:17.108 11:28:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.108 1+0 records in 00:05:17.108 1+0 records out 00:05:17.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235527 s, 17.4 MB/s 00:05:17.108 11:28:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:17.108 11:28:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:17.108 11:28:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:17.108 11:28:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:17.108 11:28:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:17.108 11:28:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.108 11:28:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.108 11:28:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.108 11:28:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.108 11:28:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.108 11:28:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.108 { 00:05:17.108 "nbd_device": "/dev/nbd0", 00:05:17.108 "bdev_name": "Malloc0" 00:05:17.108 }, 00:05:17.108 { 00:05:17.108 "nbd_device": "/dev/nbd1", 00:05:17.108 "bdev_name": "Malloc1" 00:05:17.108 } 00:05:17.108 ]' 00:05:17.108 11:28:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.108 { 00:05:17.108 "nbd_device": "/dev/nbd0", 00:05:17.108 "bdev_name": "Malloc0" 00:05:17.108 }, 00:05:17.108 { 00:05:17.108 "nbd_device": "/dev/nbd1", 00:05:17.108 "bdev_name": "Malloc1" 00:05:17.108 } 00:05:17.108 ]' 00:05:17.108 11:28:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.108 11:28:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.108 /dev/nbd1' 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.366 /dev/nbd1' 00:05:17.366 11:28:20 
event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.366 256+0 records in 00:05:17.366 256+0 records out 00:05:17.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101427 s, 103 MB/s 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.366 256+0 records in 00:05:17.366 256+0 records out 00:05:17.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200498 s, 52.3 MB/s 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 
of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.366 256+0 records in 00:05:17.366 256+0 records out 00:05:17.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167709 s, 62.5 MB/s 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.366 11:28:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:17.625 11:28:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:17.625 11:28:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:17.625 11:28:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:17.625 11:28:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.625 11:28:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.625 11:28:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:17.625 11:28:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.625 11:28:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.625 11:28:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.625 11:28:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:17.883 11:28:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:17.883 11:28:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:17.883 11:28:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:17.883 11:28:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.883 11:28:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.883 11:28:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:17.883 11:28:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.883 11:28:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
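After each nbd_stop_disk RPC above, waitfornbd_exit polls `grep -q -w nbdX /proc/partitions` up to 20 times until the device entry disappears. The same bounded-retry pattern, sketched as a generic helper; the `wait_gone` name and the file-existence condition are illustrative stand-ins, not part of nbd_common.sh:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Poll until a path disappears, up to max_tries attempts; return 1 on timeout.
# In nbd_common.sh the condition is `grep -q -w "$name" /proc/partitions`.
wait_gone() {
    local path=$1 max_tries=${2:-20} i
    for ((i = 1; i <= max_tries; i++)); do
        [ -e "$path" ] || return 0   # gone: success
        sleep 0.1
    done
    return 1                          # still present after max_tries polls
}

tmp=$(mktemp)
(sleep 0.3; rm -f "$tmp") &          # simulate the device detaching
wait_gone "$tmp" 20
wait                                  # reap the background subshell
echo gone
```

The bounded loop keeps a hung detach from stalling the run forever, which is why the trace shows the `(( i <= 20 ))` guard rather than an unbounded spin.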
00:05:17.883 11:28:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.884 11:28:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.884 11:28:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.884 11:28:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:17.884 11:28:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:17.884 11:28:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.884 11:28:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:17.884 11:28:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:17.884 11:28:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.884 11:28:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:17.884 11:28:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:17.884 11:28:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:17.884 11:28:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:17.884 11:28:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:17.884 11:28:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:17.884 11:28:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.142 11:28:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:18.435 [2024-11-20 11:28:21.733007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.435 [2024-11-20 11:28:21.777427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.435 [2024-11-20 11:28:21.777429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.435 [2024-11-20 11:28:21.825400] notify.c: 
45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:18.435 [2024-11-20 11:28:21.825447] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.804 11:28:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1496126 /var/tmp/spdk-nbd.sock 00:05:21.804 11:28:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1496126 ']' 00:05:21.804 11:28:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.804 11:28:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.804 11:28:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:21.804 11:28:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.804 11:28:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.804 11:28:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.804 11:28:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:21.804 11:28:24 event.app_repeat -- event/event.sh@39 -- # killprocess 1496126 00:05:21.804 11:28:24 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1496126 ']' 00:05:21.804 11:28:24 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1496126 00:05:21.804 11:28:24 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:21.804 11:28:24 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.804 11:28:24 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1496126 00:05:21.804 11:28:24 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.804 11:28:24 event.app_repeat -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.804 11:28:24 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1496126' 00:05:21.804 killing process with pid 1496126 00:05:21.805 11:28:24 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1496126 00:05:21.805 11:28:24 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1496126 00:05:21.805 spdk_app_start is called in Round 0. 00:05:21.805 Shutdown signal received, stop current app iteration 00:05:21.805 Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 reinitialization... 00:05:21.805 spdk_app_start is called in Round 1. 00:05:21.805 Shutdown signal received, stop current app iteration 00:05:21.805 Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 reinitialization... 00:05:21.805 spdk_app_start is called in Round 2. 00:05:21.805 Shutdown signal received, stop current app iteration 00:05:21.805 Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 reinitialization... 00:05:21.805 spdk_app_start is called in Round 3. 
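The killprocess flow traced above (common/autotest_common.sh) verifies the pid is alive with `kill -0`, reads its comm name via ps so it never kills a sudo wrapper, then kills and reaps it. A condensed sketch of that flow, exercised here against a throwaway sleep process rather than the real spdk_tgt:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Condensed version of the killprocess flow; Linux/procps ps assumed.
killprocess() {
    local pid=$1 name
    kill -0 "$pid"                        # fail early if it is not alive
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != sudo ]                   # refuse to kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                   # reap; SIGTERM exit status is expected
}

sleep 30 &
killprocess $!
```

The `wait "$pid" || true` matters under set -e: a process terminated by SIGTERM reports a non-zero status even though the kill succeeded.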
00:05:21.805 Shutdown signal received, stop current app iteration 00:05:21.805 11:28:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:21.805 11:28:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:21.805 00:05:21.805 real 0m16.462s 00:05:21.805 user 0m35.538s 00:05:21.805 sys 0m3.075s 00:05:21.805 11:28:24 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.805 11:28:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.805 ************************************ 00:05:21.805 END TEST app_repeat 00:05:21.805 ************************************ 00:05:21.805 11:28:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:21.805 11:28:25 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:21.805 11:28:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.805 11:28:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.805 11:28:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.805 ************************************ 00:05:21.805 START TEST cpu_locks 00:05:21.805 ************************************ 00:05:21.805 11:28:25 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:21.805 * Looking for test storage... 
00:05:21.805 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:21.805 11:28:25 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.805 11:28:25 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.805 11:28:25 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.805 11:28:25 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.805 11:28:25 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:21.805 11:28:25 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.805 11:28:25 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.805 --rc genhtml_branch_coverage=1 00:05:21.805 --rc genhtml_function_coverage=1 00:05:21.805 --rc genhtml_legend=1 00:05:21.805 --rc geninfo_all_blocks=1 00:05:21.805 --rc geninfo_unexecuted_blocks=1 00:05:21.805 00:05:21.805 ' 00:05:21.805 11:28:25 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.805 --rc genhtml_branch_coverage=1 00:05:21.805 --rc genhtml_function_coverage=1 00:05:21.805 --rc genhtml_legend=1 00:05:21.805 --rc geninfo_all_blocks=1 00:05:21.805 --rc geninfo_unexecuted_blocks=1 
00:05:21.805 00:05:21.805 ' 00:05:21.805 11:28:25 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.805 --rc genhtml_branch_coverage=1 00:05:21.805 --rc genhtml_function_coverage=1 00:05:21.805 --rc genhtml_legend=1 00:05:21.805 --rc geninfo_all_blocks=1 00:05:21.805 --rc geninfo_unexecuted_blocks=1 00:05:21.805 00:05:21.805 ' 00:05:21.805 11:28:25 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.805 --rc genhtml_branch_coverage=1 00:05:21.805 --rc genhtml_function_coverage=1 00:05:21.805 --rc genhtml_legend=1 00:05:21.805 --rc geninfo_all_blocks=1 00:05:21.805 --rc geninfo_unexecuted_blocks=1 00:05:21.805 00:05:21.805 ' 00:05:21.805 11:28:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:21.805 11:28:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:21.805 11:28:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:21.805 11:28:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:21.805 11:28:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.805 11:28:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.805 11:28:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.805 ************************************ 00:05:21.805 START TEST default_locks 00:05:21.805 ************************************ 00:05:21.805 11:28:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:21.805 11:28:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1498547 00:05:21.805 11:28:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1498547 00:05:21.805 11:28:25 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.805 11:28:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1498547 ']' 00:05:21.805 11:28:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.805 11:28:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.805 11:28:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.805 11:28:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.805 11:28:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.064 [2024-11-20 11:28:25.323767] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:05:22.064 [2024-11-20 11:28:25.323828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498547 ] 00:05:22.064 [2024-11-20 11:28:25.400909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.064 [2024-11-20 11:28:25.447280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.998 11:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.998 11:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:22.998 11:28:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1498547 00:05:22.998 11:28:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1498547 00:05:22.998 11:28:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.563 lslocks: write error 00:05:23.563 11:28:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1498547 00:05:23.563 11:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1498547 ']' 00:05:23.563 11:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1498547 00:05:23.563 11:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:23.563 11:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.563 11:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1498547 00:05:23.563 11:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.563 11:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.563 11:28:26 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1498547' 00:05:23.563 killing process with pid 1498547 00:05:23.563 11:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1498547 00:05:23.563 11:28:26 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1498547 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1498547 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1498547 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1498547 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1498547 ']' 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.822 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1498547) - No such process 00:05:23.822 ERROR: process (pid: 1498547) is no longer running 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:23.822 00:05:23.822 real 0m1.883s 00:05:23.822 user 0m1.963s 00:05:23.822 sys 0m0.754s 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.822 11:28:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.822 ************************************ 00:05:23.822 END TEST default_locks 00:05:23.822 ************************************ 00:05:23.822 11:28:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:23.822 11:28:27 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.822 11:28:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.822 11:28:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.822 ************************************ 00:05:23.822 START TEST default_locks_via_rpc 00:05:23.822 ************************************ 00:05:23.822 11:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:23.823 11:28:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1498936 00:05:23.823 11:28:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1498936 00:05:23.823 11:28:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.823 11:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1498936 ']' 00:05:23.823 11:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.823 11:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.823 11:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.823 11:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.823 11:28:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.823 [2024-11-20 11:28:27.291066] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:05:23.823 [2024-11-20 11:28:27.291131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498936 ] 00:05:24.081 [2024-11-20 11:28:27.369204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.081 [2024-11-20 11:28:27.417425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.013 11:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.013 11:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:25.013 11:28:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:25.013 11:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.013 11:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.013 11:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.013 11:28:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:25.013 11:28:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:25.013 11:28:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:25.013 11:28:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:25.013 11:28:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:25.013 11:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.013 11:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.013 11:28:28 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.013 11:28:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1498936 00:05:25.013 11:28:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.013 11:28:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1498936 00:05:25.577 11:28:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1498936 00:05:25.577 11:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1498936 ']' 00:05:25.577 11:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1498936 00:05:25.577 11:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:25.577 11:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.577 11:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1498936 00:05:25.577 11:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.577 11:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.577 11:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1498936' 00:05:25.577 killing process with pid 1498936 00:05:25.577 11:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1498936 00:05:25.577 11:28:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1498936 00:05:25.835 00:05:25.835 real 0m1.951s 00:05:25.835 user 0m2.058s 00:05:25.835 sys 0m0.692s 00:05:25.835 11:28:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.835 11:28:29 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.835 ************************************ 00:05:25.835 END TEST default_locks_via_rpc 00:05:25.835 ************************************ 00:05:25.835 11:28:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:25.835 11:28:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.835 11:28:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.835 11:28:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.835 ************************************ 00:05:25.835 START TEST non_locking_app_on_locked_coremask 00:05:25.835 ************************************ 00:05:25.836 11:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:25.836 11:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1499160 00:05:25.836 11:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1499160 /var/tmp/spdk.sock 00:05:25.836 11:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.836 11:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1499160 ']' 00:05:25.836 11:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.836 11:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.836 11:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:25.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.836 11:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.836 11:28:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.094 [2024-11-20 11:28:29.320506] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:26.094 [2024-11-20 11:28:29.320568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499160 ] 00:05:26.094 [2024-11-20 11:28:29.397149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.094 [2024-11-20 11:28:29.445069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.028 11:28:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.028 11:28:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:27.028 11:28:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1499337 00:05:27.028 11:28:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1499337 /var/tmp/spdk2.sock 00:05:27.028 11:28:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:27.028 11:28:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1499337 ']' 00:05:27.028 11:28:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:27.028 11:28:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.028 11:28:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.028 11:28:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.028 11:28:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.028 [2024-11-20 11:28:30.214504] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:27.028 [2024-11-20 11:28:30.214567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499337 ] 00:05:27.028 [2024-11-20 11:28:30.333309] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:27.028 [2024-11-20 11:28:30.333343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.028 [2024-11-20 11:28:30.421599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.594 11:28:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.594 11:28:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:27.594 11:28:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1499160 00:05:27.594 11:28:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1499160 00:05:27.594 11:28:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.965 lslocks: write error 00:05:28.965 11:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1499160 00:05:28.965 11:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1499160 ']' 00:05:28.965 11:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1499160 00:05:28.965 11:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:28.965 11:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.965 11:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1499160 00:05:28.965 11:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.965 11:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.965 11:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1499160' 00:05:28.965 killing process with pid 1499160 00:05:28.965 11:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1499160 00:05:28.965 11:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1499160 00:05:29.529 11:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1499337 00:05:29.529 11:28:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1499337 ']' 00:05:29.529 11:28:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1499337 00:05:29.529 11:28:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:29.787 11:28:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.787 11:28:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1499337 00:05:29.787 11:28:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.787 11:28:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.787 11:28:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1499337' 00:05:29.787 killing process with pid 1499337 00:05:29.787 11:28:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1499337 00:05:29.787 11:28:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1499337 00:05:30.045 00:05:30.045 real 0m4.140s 00:05:30.045 user 0m4.420s 00:05:30.045 sys 0m1.443s 00:05:30.045 11:28:33 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.045 11:28:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.045 ************************************ 00:05:30.045 END TEST non_locking_app_on_locked_coremask 00:05:30.045 ************************************ 00:05:30.045 11:28:33 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:30.045 11:28:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.045 11:28:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.045 11:28:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.045 ************************************ 00:05:30.045 START TEST locking_app_on_unlocked_coremask 00:05:30.045 ************************************ 00:05:30.045 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:30.045 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1499743 00:05:30.045 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1499743 /var/tmp/spdk.sock 00:05:30.045 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:30.045 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1499743 ']' 00:05:30.045 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.045 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.045 11:28:33 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.045 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.045 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.303 [2024-11-20 11:28:33.544754] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:30.303 [2024-11-20 11:28:33.544816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499743 ] 00:05:30.303 [2024-11-20 11:28:33.621926] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:30.303 [2024-11-20 11:28:33.621958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.303 [2024-11-20 11:28:33.670553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.560 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.560 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:30.560 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1499866 00:05:30.560 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1499866 /var/tmp/spdk2.sock 00:05:30.560 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:30.560 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1499866 ']' 00:05:30.560 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.560 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.560 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.560 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.560 11:28:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.560 [2024-11-20 11:28:33.940770] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:05:30.560 [2024-11-20 11:28:33.940841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499866 ] 00:05:30.817 [2024-11-20 11:28:34.049578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.817 [2024-11-20 11:28:34.137511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.381 11:28:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.381 11:28:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:31.381 11:28:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1499866 00:05:31.381 11:28:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1499866 00:05:31.381 11:28:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.752 lslocks: write error 00:05:32.752 11:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1499743 00:05:32.752 11:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1499743 ']' 00:05:32.752 11:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1499743 00:05:32.752 11:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:32.752 11:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.752 11:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1499743 00:05:32.752 11:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:32.752 11:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:32.752 11:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1499743'
00:05:32.752 killing process with pid 1499743
00:05:32.752 11:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1499743
00:05:32.752 11:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1499743
00:05:33.317 11:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1499866
00:05:33.317 11:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1499866 ']'
00:05:33.317 11:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1499866
00:05:33.317 11:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:33.317 11:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:33.317 11:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1499866
00:05:33.317 11:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:33.317 11:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:33.317 11:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1499866'
00:05:33.317 killing process with pid 1499866
00:05:33.317 11:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1499866
00:05:33.317 11:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1499866
00:05:33.883
00:05:33.883 real 0m3.630s
00:05:33.883 user 0m3.793s
00:05:33.883 sys 0m1.351s
00:05:33.883 11:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:33.883 11:28:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:33.883 ************************************
00:05:33.883 END TEST locking_app_on_unlocked_coremask
00:05:33.883 ************************************
00:05:33.883 11:28:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:33.883 11:28:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:33.883 11:28:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:33.883 11:28:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:33.883 ************************************
00:05:33.883 START TEST locking_app_on_locked_coremask
00:05:33.883 ************************************
00:05:33.883 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:05:33.883 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1500319
00:05:33.883 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1500319 /var/tmp/spdk.sock
00:05:33.883 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:33.883 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1500319 ']'
00:05:33.883 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:33.883 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:33.883 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:33.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:33.883 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:33.883 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:33.883 [2024-11-20 11:28:37.230564] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:05:33.883 [2024-11-20 11:28:37.230611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500319 ]
00:05:33.883 [2024-11-20 11:28:37.308107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:33.883 [2024-11-20 11:28:37.356192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:34.141 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:34.141 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:34.141 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1500329
00:05:34.141 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1500329 /var/tmp/spdk2.sock
00:05:34.141 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:34.141 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1500329 /var/tmp/spdk2.sock
00:05:34.141 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:34.141 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:34.141 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:34.141 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:34.141 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:34.141 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1500329 /var/tmp/spdk2.sock
00:05:34.141 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1500329 ']'
00:05:34.141 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:34.141 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:34.141 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:34.141 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:34.141 11:28:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:34.399 [2024-11-20 11:28:37.639111] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:05:34.399 [2024-11-20 11:28:37.639167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500329 ]
00:05:34.399 [2024-11-20 11:28:37.745673] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1500319 has claimed it.
[2024-11-20 11:28:37.745711] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:34.966 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1500329) - No such process
00:05:34.966 ERROR: process (pid: 1500329) is no longer running
00:05:34.966 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:34.966 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:34.966 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:34.966 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:34.966 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:34.966 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:34.966 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1500319
00:05:34.966 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1500319
00:05:34.966 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:35.532 lslocks: write error
00:05:35.532 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1500319
00:05:35.532 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1500319 ']'
00:05:35.532 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1500319
00:05:35.532 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:35.532 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:35.532 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1500319
00:05:35.532 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:35.532 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:35.532 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1500319'
killing process with pid 1500319
00:05:35.532 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1500319
00:05:35.532 11:28:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1500319
00:05:36.099
00:05:36.099 real 0m2.079s
00:05:36.099 user 0m2.209s
00:05:36.099 sys 0m0.763s
00:05:36.099 11:28:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:36.099 11:28:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:36.099 ************************************
00:05:36.099 END TEST locking_app_on_locked_coremask
00:05:36.099 ************************************
00:05:36.099 11:28:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:36.099 11:28:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:36.099 11:28:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:36.099 11:28:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:36.099 ************************************
00:05:36.099 START TEST locking_overlapped_coremask
00:05:36.099 ************************************
00:05:36.099 11:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:05:36.099 11:28:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1500662
00:05:36.099 11:28:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1500662 /var/tmp/spdk.sock
00:05:36.099 11:28:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:36.099 11:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1500662 ']'
00:05:36.099 11:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:36.099 11:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:36.099 11:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:36.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:36.100 11:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:36.100 11:28:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:36.100 [2024-11-20 11:28:39.410164] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:05:36.100 [2024-11-20 11:28:39.410222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500662 ]
00:05:36.100 [2024-11-20 11:28:39.488684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:36.100 [2024-11-20 11:28:39.537589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:36.100 [2024-11-20 11:28:39.537676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:36.100 [2024-11-20 11:28:39.537678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.033 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:37.033 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:37.033 11:28:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1500726
00:05:37.033 11:28:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1500726 /var/tmp/spdk2.sock
00:05:37.033 11:28:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:37.033 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:37.033 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1500726 /var/tmp/spdk2.sock
00:05:37.033 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:37.033 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:37.033 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:37.034 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:37.034 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1500726 /var/tmp/spdk2.sock
00:05:37.034 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1500726 ']'
00:05:37.034 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:37.034 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:37.034 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:37.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:37.034 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:37.034 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:37.034 [2024-11-20 11:28:40.305625] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:05:37.034 [2024-11-20 11:28:40.305687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500726 ]
00:05:37.034 [2024-11-20 11:28:40.418550] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1500662 has claimed it.
[2024-11-20 11:28:40.418596] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:37.600 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1500726) - No such process
00:05:37.600 ERROR: process (pid: 1500726) is no longer running
00:05:37.600 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:37.600 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:37.600 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:37.600 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:37.600 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:37.600 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:37.600 11:28:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:37.600 11:28:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:37.600 11:28:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:37.600 11:28:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:37.600 11:28:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1500662
00:05:37.600 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1500662 ']'
00:05:37.600 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1500662
00:05:37.600 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:05:37.600 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:37.600 11:28:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1500662
00:05:37.600 11:28:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:37.600 11:28:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:37.600 11:28:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1500662'
killing process with pid 1500662
00:05:37.600 11:28:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1500662
00:05:37.600 11:28:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1500662
00:05:38.167
00:05:38.167 real 0m2.013s
00:05:38.167 user 0m5.758s
00:05:38.167 sys 0m0.489s
00:05:38.167 11:28:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:38.167 11:28:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:38.167 ************************************
00:05:38.167 END TEST locking_overlapped_coremask
00:05:38.167 ************************************
00:05:38.167 11:28:41 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:38.167 11:28:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:38.167 11:28:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:38.167 11:28:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:38.167 ************************************
00:05:38.167 START TEST locking_overlapped_coremask_via_rpc
00:05:38.167 ************************************
00:05:38.167 11:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:05:38.167 11:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1500942
00:05:38.167 11:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1500942 /var/tmp/spdk.sock
00:05:38.167 11:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1500942 ']'
00:05:38.167 11:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:38.167 11:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:38.167 11:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:38.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:38.167 11:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:38.167 11:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:38.167 11:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:38.167 [2024-11-20 11:28:41.488821] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:05:38.167 [2024-11-20 11:28:41.488873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500942 ]
00:05:38.167 [2024-11-20 11:28:41.566119] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
[2024-11-20 11:28:41.566149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:38.167 [2024-11-20 11:28:41.617508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:38.167 [2024-11-20 11:28:41.617592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:38.167 [2024-11-20 11:28:41.617594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:39.102 11:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:39.102 11:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:39.102 11:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1501120
00:05:39.102 11:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1501120 /var/tmp/spdk2.sock
00:05:39.102 11:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:05:39.102 11:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1501120 ']'
00:05:39.102 11:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:39.102 11:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:39.102 11:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:39.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:39.102 11:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:39.102 11:28:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:39.102 [2024-11-20 11:28:42.378560] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:05:39.102 [2024-11-20 11:28:42.378621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501120 ]
00:05:39.102 [2024-11-20 11:28:42.486933] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:39.102 [2024-11-20 11:28:42.486963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:39.361 [2024-11-20 11:28:42.582571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:39.361 [2024-11-20 11:28:42.586081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:39.361 [2024-11-20 11:28:42.586082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:39.930 [2024-11-20 11:28:43.228116] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1500942 has claimed it.
00:05:39.930 request:
00:05:39.930 {
00:05:39.930 "method": "framework_enable_cpumask_locks",
00:05:39.930 "req_id": 1
00:05:39.930 }
00:05:39.930 Got JSON-RPC error response
00:05:39.930 response:
00:05:39.930 {
00:05:39.930 "code": -32603,
00:05:39.930 "message": "Failed to claim CPU core: 2"
00:05:39.930 }
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1500942 /var/tmp/spdk.sock
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1500942 ']'
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:39.930 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:39.931 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:39.931 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:40.188 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:40.189 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:40.189 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1501120 /var/tmp/spdk2.sock
00:05:40.189 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1501120 ']'
00:05:40.189 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:40.189 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:40.189 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:40.189 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:40.189 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:40.189 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:40.189 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:40.189 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:05:40.189 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:40.189 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:40.189 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:40.189
00:05:40.189 real 0m2.223s
00:05:40.189 user 0m0.949s
00:05:40.189 sys 0m0.209s
00:05:40.189 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:40.189 11:28:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:40.189 ************************************
00:05:40.189 END TEST locking_overlapped_coremask_via_rpc
00:05:40.189 ************************************
00:05:40.447 11:28:43 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:05:40.447 11:28:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1500942 ]]
00:05:40.447 11:28:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1500942
00:05:40.447 11:28:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1500942 ']'
00:05:40.447 11:28:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1500942
00:05:40.447 11:28:43 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:05:40.447 11:28:43 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:40.447 11:28:43 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1500942
00:05:40.447 11:28:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:40.447 11:28:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:40.447 11:28:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1500942'
killing process with pid 1500942
00:05:40.447 11:28:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1500942
00:05:40.447 11:28:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1500942
00:05:40.706 11:28:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1501120 ]]
00:05:40.706 11:28:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1501120
00:05:40.706 11:28:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1501120 ']'
00:05:40.706 11:28:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1501120
00:05:40.706 11:28:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:05:40.706 11:28:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:40.706 11:28:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1501120
00:05:40.706 11:28:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:40.706 11:28:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:05:40.706 11:28:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1501120'
killing process with pid 1501120
00:05:40.706 11:28:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1501120
00:05:40.706 11:28:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1501120
00:05:41.273 11:28:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:05:41.273 11:28:44 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:05:41.273 11:28:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1500942 ]]
00:05:41.273 11:28:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1500942
00:05:41.273 11:28:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1500942 ']'
00:05:41.273 11:28:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1500942
00:05:41.273 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1500942) - No such process
00:05:41.273 11:28:44 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1500942 is not found'
Process with pid 1500942 is not found
00:05:41.273 11:28:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1501120 ]]
00:05:41.273 11:28:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1501120
00:05:41.273 11:28:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1501120 ']'
00:05:41.273 11:28:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1501120
00:05:41.273 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1501120) - No such process
00:05:41.273 11:28:44 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1501120 is not found'
Process with pid 1501120 is not found
00:05:41.273 11:28:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:05:41.273
00:05:41.273 real 0m19.467s
00:05:41.273 user 0m32.635s
00:05:41.273 sys 0m6.874s
00:05:41.273 11:28:44 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:41.273 11:28:44
event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.273 ************************************ 00:05:41.273 END TEST cpu_locks 00:05:41.273 ************************************ 00:05:41.273 00:05:41.273 real 0m44.015s 00:05:41.273 user 1m20.490s 00:05:41.273 sys 0m11.064s 00:05:41.273 11:28:44 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.273 11:28:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.273 ************************************ 00:05:41.273 END TEST event 00:05:41.273 ************************************ 00:05:41.273 11:28:44 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:05:41.273 11:28:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.273 11:28:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.273 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:41.273 ************************************ 00:05:41.273 START TEST thread 00:05:41.273 ************************************ 00:05:41.273 11:28:44 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:05:41.273 * Looking for test storage... 
00:05:41.273 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:05:41.273 11:28:44 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.273 11:28:44 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.273 11:28:44 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.534 11:28:44 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.534 11:28:44 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.534 11:28:44 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.534 11:28:44 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.534 11:28:44 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.534 11:28:44 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.534 11:28:44 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.534 11:28:44 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.534 11:28:44 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.534 11:28:44 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.534 11:28:44 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.534 11:28:44 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.534 11:28:44 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:41.534 11:28:44 thread -- scripts/common.sh@345 -- # : 1 00:05:41.534 11:28:44 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.534 11:28:44 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.534 11:28:44 thread -- scripts/common.sh@365 -- # decimal 1 00:05:41.534 11:28:44 thread -- scripts/common.sh@353 -- # local d=1 00:05:41.534 11:28:44 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.534 11:28:44 thread -- scripts/common.sh@355 -- # echo 1 00:05:41.534 11:28:44 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.534 11:28:44 thread -- scripts/common.sh@366 -- # decimal 2 00:05:41.534 11:28:44 thread -- scripts/common.sh@353 -- # local d=2 00:05:41.534 11:28:44 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.534 11:28:44 thread -- scripts/common.sh@355 -- # echo 2 00:05:41.534 11:28:44 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.534 11:28:44 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.534 11:28:44 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.534 11:28:44 thread -- scripts/common.sh@368 -- # return 0 00:05:41.534 11:28:44 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.534 11:28:44 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.534 --rc genhtml_branch_coverage=1 00:05:41.534 --rc genhtml_function_coverage=1 00:05:41.534 --rc genhtml_legend=1 00:05:41.534 --rc geninfo_all_blocks=1 00:05:41.534 --rc geninfo_unexecuted_blocks=1 00:05:41.534 00:05:41.534 ' 00:05:41.534 11:28:44 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.534 --rc genhtml_branch_coverage=1 00:05:41.534 --rc genhtml_function_coverage=1 00:05:41.534 --rc genhtml_legend=1 00:05:41.534 --rc geninfo_all_blocks=1 00:05:41.534 --rc geninfo_unexecuted_blocks=1 00:05:41.534 00:05:41.534 ' 00:05:41.534 11:28:44 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.534 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.534 --rc genhtml_branch_coverage=1 00:05:41.534 --rc genhtml_function_coverage=1 00:05:41.534 --rc genhtml_legend=1 00:05:41.534 --rc geninfo_all_blocks=1 00:05:41.534 --rc geninfo_unexecuted_blocks=1 00:05:41.534 00:05:41.534 ' 00:05:41.534 11:28:44 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.534 --rc genhtml_branch_coverage=1 00:05:41.534 --rc genhtml_function_coverage=1 00:05:41.534 --rc genhtml_legend=1 00:05:41.534 --rc geninfo_all_blocks=1 00:05:41.534 --rc geninfo_unexecuted_blocks=1 00:05:41.534 00:05:41.534 ' 00:05:41.534 11:28:44 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.534 11:28:44 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:41.534 11:28:44 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.534 11:28:44 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.534 ************************************ 00:05:41.534 START TEST thread_poller_perf 00:05:41.534 ************************************ 00:05:41.534 11:28:44 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.534 [2024-11-20 11:28:44.881560] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:05:41.534 [2024-11-20 11:28:44.881611] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501557 ] 00:05:41.534 [2024-11-20 11:28:44.957901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.534 [2024-11-20 11:28:45.004430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.534 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:42.911 [2024-11-20T10:28:46.391Z] ====================================== 00:05:42.911 [2024-11-20T10:28:46.391Z] busy:2309353952 (cyc) 00:05:42.911 [2024-11-20T10:28:46.391Z] total_run_count: 426000 00:05:42.911 [2024-11-20T10:28:46.391Z] tsc_hz: 2300000000 (cyc) 00:05:42.911 [2024-11-20T10:28:46.391Z] ====================================== 00:05:42.911 [2024-11-20T10:28:46.391Z] poller_cost: 5421 (cyc), 2356 (nsec) 00:05:42.911 00:05:42.911 real 0m1.190s 00:05:42.911 user 0m1.112s 00:05:42.911 sys 0m0.074s 00:05:42.911 11:28:46 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.911 11:28:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.911 ************************************ 00:05:42.911 END TEST thread_poller_perf 00:05:42.911 ************************************ 00:05:42.911 11:28:46 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:42.911 11:28:46 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:42.911 11:28:46 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.911 11:28:46 thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.911 ************************************ 00:05:42.911 START TEST thread_poller_perf 00:05:42.911 
************************************ 00:05:42.911 11:28:46 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:42.911 [2024-11-20 11:28:46.140128] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:42.911 [2024-11-20 11:28:46.140185] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501691 ] 00:05:42.911 [2024-11-20 11:28:46.217441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.911 [2024-11-20 11:28:46.263234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.911 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:43.845 [2024-11-20T10:28:47.325Z] ====================================== 00:05:43.845 [2024-11-20T10:28:47.325Z] busy:2301803026 (cyc) 00:05:43.845 [2024-11-20T10:28:47.325Z] total_run_count: 5600000 00:05:43.845 [2024-11-20T10:28:47.325Z] tsc_hz: 2300000000 (cyc) 00:05:43.845 [2024-11-20T10:28:47.325Z] ====================================== 00:05:43.845 [2024-11-20T10:28:47.325Z] poller_cost: 411 (cyc), 178 (nsec) 00:05:43.845 00:05:43.845 real 0m1.190s 00:05:43.845 user 0m1.104s 00:05:43.845 sys 0m0.083s 00:05:43.845 11:28:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.845 11:28:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.845 ************************************ 00:05:43.845 END TEST thread_poller_perf 00:05:43.845 ************************************ 00:05:44.102 11:28:47 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:44.103 00:05:44.103 real 0m2.712s 00:05:44.103 user 0m2.371s 00:05:44.103 sys 0m0.364s 00:05:44.103 11:28:47 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.103 11:28:47 thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.103 ************************************ 00:05:44.103 END TEST thread 00:05:44.103 ************************************ 00:05:44.103 11:28:47 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:44.103 11:28:47 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:05:44.103 11:28:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.103 11:28:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.103 11:28:47 -- common/autotest_common.sh@10 -- # set +x 00:05:44.103 ************************************ 00:05:44.103 START TEST app_cmdline 00:05:44.103 ************************************ 00:05:44.103 11:28:47 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:05:44.103 * Looking for test storage... 00:05:44.103 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:05:44.103 11:28:47 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.103 11:28:47 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.103 11:28:47 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.361 11:28:47 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.361 11:28:47 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.361 11:28:47 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.361 11:28:47 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.361 11:28:47 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.361 11:28:47 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.361 11:28:47 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.361 11:28:47 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.361 
11:28:47 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.361 11:28:47 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.361 11:28:47 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.361 11:28:47 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.361 11:28:47 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:44.361 11:28:47 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:44.361 11:28:47 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.361 11:28:47 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.361 11:28:47 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:44.361 11:28:47 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:44.361 11:28:47 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.361 11:28:47 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:44.362 11:28:47 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.362 11:28:47 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:44.362 11:28:47 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:44.362 11:28:47 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.362 11:28:47 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:44.362 11:28:47 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.362 11:28:47 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.362 11:28:47 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.362 11:28:47 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:44.362 11:28:47 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.362 11:28:47 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.362 --rc genhtml_branch_coverage=1 00:05:44.362 
--rc genhtml_function_coverage=1 00:05:44.362 --rc genhtml_legend=1 00:05:44.362 --rc geninfo_all_blocks=1 00:05:44.362 --rc geninfo_unexecuted_blocks=1 00:05:44.362 00:05:44.362 ' 00:05:44.362 11:28:47 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.362 --rc genhtml_branch_coverage=1 00:05:44.362 --rc genhtml_function_coverage=1 00:05:44.362 --rc genhtml_legend=1 00:05:44.362 --rc geninfo_all_blocks=1 00:05:44.362 --rc geninfo_unexecuted_blocks=1 00:05:44.362 00:05:44.362 ' 00:05:44.362 11:28:47 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.362 --rc genhtml_branch_coverage=1 00:05:44.362 --rc genhtml_function_coverage=1 00:05:44.362 --rc genhtml_legend=1 00:05:44.362 --rc geninfo_all_blocks=1 00:05:44.362 --rc geninfo_unexecuted_blocks=1 00:05:44.362 00:05:44.362 ' 00:05:44.362 11:28:47 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.362 --rc genhtml_branch_coverage=1 00:05:44.362 --rc genhtml_function_coverage=1 00:05:44.362 --rc genhtml_legend=1 00:05:44.362 --rc geninfo_all_blocks=1 00:05:44.362 --rc geninfo_unexecuted_blocks=1 00:05:44.362 00:05:44.362 ' 00:05:44.362 11:28:47 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:44.362 11:28:47 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1502002 00:05:44.362 11:28:47 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:44.362 11:28:47 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1502002 00:05:44.362 11:28:47 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1502002 ']' 00:05:44.362 11:28:47 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:05:44.362 11:28:47 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.362 11:28:47 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.362 11:28:47 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.362 11:28:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.362 [2024-11-20 11:28:47.673221] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:44.362 [2024-11-20 11:28:47.673279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502002 ] 00:05:44.362 [2024-11-20 11:28:47.750901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.362 [2024-11-20 11:28:47.798498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.296 11:28:48 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.296 11:28:48 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:45.296 11:28:48 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:45.296 { 00:05:45.296 "version": "SPDK v25.01-pre git sha1 097badaeb", 00:05:45.296 "fields": { 00:05:45.296 "major": 25, 00:05:45.296 "minor": 1, 00:05:45.296 "patch": 0, 00:05:45.296 "suffix": "-pre", 00:05:45.296 "commit": "097badaeb" 00:05:45.296 } 00:05:45.296 } 00:05:45.296 11:28:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:45.297 11:28:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:45.297 11:28:48 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:05:45.297 11:28:48 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:45.297 11:28:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:45.297 11:28:48 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.297 11:28:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:45.297 11:28:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:45.297 11:28:48 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:45.297 11:28:48 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.297 11:28:48 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:45.297 11:28:48 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:45.297 11:28:48 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:45.297 11:28:48 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:45.297 11:28:48 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:45.297 11:28:48 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:45.297 11:28:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.297 11:28:48 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:45.297 11:28:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.297 11:28:48 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:45.297 11:28:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:05:45.297 11:28:48 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:45.297 11:28:48 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:05:45.297 11:28:48 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:45.555 request: 00:05:45.555 { 00:05:45.555 "method": "env_dpdk_get_mem_stats", 00:05:45.555 "req_id": 1 00:05:45.555 } 00:05:45.555 Got JSON-RPC error response 00:05:45.555 response: 00:05:45.555 { 00:05:45.555 "code": -32601, 00:05:45.555 "message": "Method not found" 00:05:45.555 } 00:05:45.555 11:28:48 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:45.555 11:28:48 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:45.555 11:28:48 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:45.555 11:28:48 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:45.555 11:28:48 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1502002 00:05:45.555 11:28:48 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1502002 ']' 00:05:45.555 11:28:48 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1502002 00:05:45.555 11:28:48 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:45.555 11:28:48 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.555 11:28:48 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1502002 00:05:45.555 11:28:48 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.555 11:28:48 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.555 11:28:48 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1502002' 00:05:45.555 killing process with pid 1502002 00:05:45.555 11:28:48 app_cmdline -- 
common/autotest_common.sh@973 -- # kill 1502002 00:05:45.555 11:28:48 app_cmdline -- common/autotest_common.sh@978 -- # wait 1502002 00:05:46.122 00:05:46.122 real 0m1.897s 00:05:46.122 user 0m2.186s 00:05:46.122 sys 0m0.558s 00:05:46.122 11:28:49 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.122 11:28:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:46.122 ************************************ 00:05:46.122 END TEST app_cmdline 00:05:46.122 ************************************ 00:05:46.122 11:28:49 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:05:46.122 11:28:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.122 11:28:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.122 11:28:49 -- common/autotest_common.sh@10 -- # set +x 00:05:46.122 ************************************ 00:05:46.122 START TEST version 00:05:46.122 ************************************ 00:05:46.122 11:28:49 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:05:46.122 * Looking for test storage... 
00:05:46.122 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:05:46.122 11:28:49 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.122 11:28:49 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.122 11:28:49 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.122 11:28:49 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.122 11:28:49 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.122 11:28:49 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.122 11:28:49 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.122 11:28:49 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.122 11:28:49 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.122 11:28:49 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.122 11:28:49 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.122 11:28:49 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.122 11:28:49 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.122 11:28:49 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.122 11:28:49 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.122 11:28:49 version -- scripts/common.sh@344 -- # case "$op" in 00:05:46.122 11:28:49 version -- scripts/common.sh@345 -- # : 1 00:05:46.122 11:28:49 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.122 11:28:49 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.122 11:28:49 version -- scripts/common.sh@365 -- # decimal 1 00:05:46.122 11:28:49 version -- scripts/common.sh@353 -- # local d=1 00:05:46.122 11:28:49 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.122 11:28:49 version -- scripts/common.sh@355 -- # echo 1 00:05:46.122 11:28:49 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.122 11:28:49 version -- scripts/common.sh@366 -- # decimal 2 00:05:46.122 11:28:49 version -- scripts/common.sh@353 -- # local d=2 00:05:46.122 11:28:49 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.122 11:28:49 version -- scripts/common.sh@355 -- # echo 2 00:05:46.122 11:28:49 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.122 11:28:49 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.122 11:28:49 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.122 11:28:49 version -- scripts/common.sh@368 -- # return 0 00:05:46.122 11:28:49 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.122 11:28:49 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.122 --rc genhtml_branch_coverage=1 00:05:46.122 --rc genhtml_function_coverage=1 00:05:46.122 --rc genhtml_legend=1 00:05:46.122 --rc geninfo_all_blocks=1 00:05:46.122 --rc geninfo_unexecuted_blocks=1 00:05:46.122 00:05:46.122 ' 00:05:46.122 11:28:49 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.122 --rc genhtml_branch_coverage=1 00:05:46.122 --rc genhtml_function_coverage=1 00:05:46.122 --rc genhtml_legend=1 00:05:46.122 --rc geninfo_all_blocks=1 00:05:46.122 --rc geninfo_unexecuted_blocks=1 00:05:46.122 00:05:46.122 ' 00:05:46.122 11:28:49 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.122 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.122 --rc genhtml_branch_coverage=1 00:05:46.122 --rc genhtml_function_coverage=1 00:05:46.122 --rc genhtml_legend=1 00:05:46.122 --rc geninfo_all_blocks=1 00:05:46.122 --rc geninfo_unexecuted_blocks=1 00:05:46.122 00:05:46.122 ' 00:05:46.122 11:28:49 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.122 --rc genhtml_branch_coverage=1 00:05:46.122 --rc genhtml_function_coverage=1 00:05:46.122 --rc genhtml_legend=1 00:05:46.122 --rc geninfo_all_blocks=1 00:05:46.122 --rc geninfo_unexecuted_blocks=1 00:05:46.122 00:05:46.122 ' 00:05:46.382 11:28:49 version -- app/version.sh@17 -- # get_header_version major 00:05:46.382 11:28:49 version -- app/version.sh@14 -- # cut -f2 00:05:46.382 11:28:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:46.382 11:28:49 version -- app/version.sh@14 -- # tr -d '"' 00:05:46.382 11:28:49 version -- app/version.sh@17 -- # major=25 00:05:46.382 11:28:49 version -- app/version.sh@18 -- # get_header_version minor 00:05:46.382 11:28:49 version -- app/version.sh@14 -- # cut -f2 00:05:46.382 11:28:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:46.382 11:28:49 version -- app/version.sh@14 -- # tr -d '"' 00:05:46.382 11:28:49 version -- app/version.sh@18 -- # minor=1 00:05:46.382 11:28:49 version -- app/version.sh@19 -- # get_header_version patch 00:05:46.382 11:28:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:46.382 11:28:49 version -- app/version.sh@14 -- # cut -f2 00:05:46.382 11:28:49 version -- app/version.sh@14 -- # tr -d '"' 00:05:46.382 11:28:49 
version -- app/version.sh@19 -- # patch=0 00:05:46.382 11:28:49 version -- app/version.sh@20 -- # get_header_version suffix 00:05:46.382 11:28:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:46.382 11:28:49 version -- app/version.sh@14 -- # cut -f2 00:05:46.382 11:28:49 version -- app/version.sh@14 -- # tr -d '"' 00:05:46.382 11:28:49 version -- app/version.sh@20 -- # suffix=-pre 00:05:46.382 11:28:49 version -- app/version.sh@22 -- # version=25.1 00:05:46.382 11:28:49 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:46.382 11:28:49 version -- app/version.sh@28 -- # version=25.1rc0 00:05:46.382 11:28:49 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:05:46.382 11:28:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:46.382 11:28:49 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:46.382 11:28:49 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:46.382 00:05:46.382 real 0m0.273s 00:05:46.382 user 0m0.145s 00:05:46.382 sys 0m0.175s 00:05:46.382 11:28:49 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.382 11:28:49 version -- common/autotest_common.sh@10 -- # set +x 00:05:46.382 ************************************ 00:05:46.382 END TEST version 00:05:46.382 ************************************ 00:05:46.382 11:28:49 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:46.383 11:28:49 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:46.383 11:28:49 -- spdk/autotest.sh@194 -- # uname -s 00:05:46.383 11:28:49 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:46.383 
11:28:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:46.383 11:28:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:46.383 11:28:49 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:46.383 11:28:49 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:46.383 11:28:49 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:46.383 11:28:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:46.383 11:28:49 -- common/autotest_common.sh@10 -- # set +x 00:05:46.383 11:28:49 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:46.383 11:28:49 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:46.383 11:28:49 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:46.383 11:28:49 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:46.383 11:28:49 -- spdk/autotest.sh@280 -- # '[' rdma = rdma ']' 00:05:46.383 11:28:49 -- spdk/autotest.sh@281 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:05:46.383 11:28:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:46.383 11:28:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.383 11:28:49 -- common/autotest_common.sh@10 -- # set +x 00:05:46.383 ************************************ 00:05:46.383 START TEST nvmf_rdma 00:05:46.383 ************************************ 00:05:46.383 11:28:49 nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:05:46.642 * Looking for test storage... 
00:05:46.642 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:05:46.642 11:28:49 nvmf_rdma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.642 11:28:49 nvmf_rdma -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.642 11:28:49 nvmf_rdma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.642 11:28:49 nvmf_rdma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.642 11:28:49 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:05:46.642 11:28:49 nvmf_rdma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.642 11:28:49 nvmf_rdma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.642 --rc genhtml_branch_coverage=1 00:05:46.642 --rc genhtml_function_coverage=1 00:05:46.642 --rc genhtml_legend=1 00:05:46.642 --rc geninfo_all_blocks=1 00:05:46.642 --rc geninfo_unexecuted_blocks=1 00:05:46.642 00:05:46.642 ' 00:05:46.642 11:28:49 nvmf_rdma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.642 --rc genhtml_branch_coverage=1 00:05:46.642 --rc genhtml_function_coverage=1 00:05:46.642 --rc genhtml_legend=1 00:05:46.642 --rc geninfo_all_blocks=1 00:05:46.642 --rc geninfo_unexecuted_blocks=1 00:05:46.642 00:05:46.642 ' 00:05:46.642 11:28:49 nvmf_rdma -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:05:46.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.642 --rc genhtml_branch_coverage=1 00:05:46.642 --rc genhtml_function_coverage=1 00:05:46.642 --rc genhtml_legend=1 00:05:46.642 --rc geninfo_all_blocks=1 00:05:46.642 --rc geninfo_unexecuted_blocks=1 00:05:46.642 00:05:46.642 ' 00:05:46.642 11:28:49 nvmf_rdma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.642 --rc genhtml_branch_coverage=1 00:05:46.642 --rc genhtml_function_coverage=1 00:05:46.642 --rc genhtml_legend=1 00:05:46.642 --rc geninfo_all_blocks=1 00:05:46.642 --rc geninfo_unexecuted_blocks=1 00:05:46.642 00:05:46.642 ' 00:05:46.643 11:28:49 nvmf_rdma -- nvmf/nvmf.sh@10 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:05:46.643 11:28:49 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:46.643 11:28:49 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.643 11:28:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:05:46.643 ************************************ 00:05:46.643 START TEST nvmf_target_core 00:05:46.643 ************************************ 00:05:46.643 11:28:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:05:46.643 * Looking for test storage... 
00:05:46.902 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:05:46.902 11:28:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.902 11:28:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.902 11:28:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.902 11:28:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.902 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.902 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.902 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.902 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.902 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.902 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.902 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.902 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.902 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.902 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.902 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.902 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:46.902 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:46.902 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.903 --rc genhtml_branch_coverage=1 00:05:46.903 --rc genhtml_function_coverage=1 00:05:46.903 --rc genhtml_legend=1 00:05:46.903 --rc geninfo_all_blocks=1 00:05:46.903 --rc geninfo_unexecuted_blocks=1 00:05:46.903 00:05:46.903 ' 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.903 --rc 
genhtml_branch_coverage=1 00:05:46.903 --rc genhtml_function_coverage=1 00:05:46.903 --rc genhtml_legend=1 00:05:46.903 --rc geninfo_all_blocks=1 00:05:46.903 --rc geninfo_unexecuted_blocks=1 00:05:46.903 00:05:46.903 ' 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.903 --rc genhtml_branch_coverage=1 00:05:46.903 --rc genhtml_function_coverage=1 00:05:46.903 --rc genhtml_legend=1 00:05:46.903 --rc geninfo_all_blocks=1 00:05:46.903 --rc geninfo_unexecuted_blocks=1 00:05:46.903 00:05:46.903 ' 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.903 --rc genhtml_branch_coverage=1 00:05:46.903 --rc genhtml_function_coverage=1 00:05:46.903 --rc genhtml_legend=1 00:05:46.903 --rc geninfo_all_blocks=1 00:05:46.903 --rc geninfo_unexecuted_blocks=1 00:05:46.903 00:05:46.903 ' 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.903 11:28:50 
nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 
00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@50 -- # : 0 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:05:46.903 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@54 -- # have_pci_nics=0 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@13 -- # TEST_ARGS=("$@") 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@15 -- # [[ 0 -eq 0 ]] 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:46.903 
************************************ 00:05:46.903 START TEST nvmf_abort 00:05:46.903 ************************************ 00:05:46.903 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:05:47.163 * Looking for test storage... 00:05:47.163 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort 
-- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.163 --rc genhtml_branch_coverage=1 00:05:47.163 --rc genhtml_function_coverage=1 00:05:47.163 --rc genhtml_legend=1 00:05:47.163 --rc geninfo_all_blocks=1 00:05:47.163 --rc geninfo_unexecuted_blocks=1 00:05:47.163 00:05:47.163 ' 00:05:47.163 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.163 --rc genhtml_branch_coverage=1 00:05:47.163 --rc genhtml_function_coverage=1 00:05:47.163 --rc genhtml_legend=1 00:05:47.164 --rc geninfo_all_blocks=1 00:05:47.164 --rc geninfo_unexecuted_blocks=1 00:05:47.164 00:05:47.164 ' 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:47.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.164 --rc genhtml_branch_coverage=1 00:05:47.164 --rc genhtml_function_coverage=1 00:05:47.164 --rc genhtml_legend=1 00:05:47.164 --rc geninfo_all_blocks=1 00:05:47.164 --rc geninfo_unexecuted_blocks=1 00:05:47.164 00:05:47.164 ' 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.164 --rc genhtml_branch_coverage=1 00:05:47.164 --rc genhtml_function_coverage=1 00:05:47.164 --rc genhtml_legend=1 00:05:47.164 --rc geninfo_all_blocks=1 00:05:47.164 --rc geninfo_unexecuted_blocks=1 00:05:47.164 00:05:47.164 ' 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:47.164 11:28:50 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:05:47.164 11:28:50 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:05:47.164 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # xtrace_disable 00:05:47.164 11:28:50 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@131 -- # pci_devs=() 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@131 -- # local -a pci_devs 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@132 -- # pci_net_devs=() 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@133 -- # pci_drivers=() 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@133 -- # local -A pci_drivers 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@135 -- # net_devs=() 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@135 -- # local -ga net_devs 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@136 -- # e810=() 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@136 -- # local -ga e810 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@137 -- # 
x722=() 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@137 -- # local -ga x722 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@138 -- # mlx=() 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@138 -- # local -ga mlx 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:05:53.769 
11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:05:53.769 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:05:53.769 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # 
[[ mlx5_core == unknown ]] 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:05:53.769 Found net devices under 0000:18:00.0: mlx_0_0 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.769 11:28:56 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:05:53.769 Found net devices under 0000:18:00.1: mlx_0_1 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # get_rdma_if_list 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@75 -- # rdma_devs=() 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:05:53.769 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@87 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@89 -- # continue 2 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@89 -- # continue 2 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # is_hw=yes 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 
00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@61 -- # uname 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_cm 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_core 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_umad 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe iw_cm 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:05:53.770 11:28:56 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@58 -- # key_initiator=target1 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:05:53.770 11:28:56 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:05:53.770 10.0.0.1 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:05:53.770 
11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:05:53.770 10.0.0.2 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:05:53.770 11:28:56 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 1 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # cat 
/sys/class/net/mlx_0_1/ifalias 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:05:53.770 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:05:53.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:53.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:05:53.771 00:05:53.771 --- 10.0.0.2 ping statistics --- 00:05:53.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:53.771 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 
00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:05:53.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:53.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.025 ms 00:05:53.771 00:05:53.771 --- 10.0.0.2 ping statistics --- 00:05:53.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:53.771 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair++ )) 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # return 0 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort 
-- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target1 
'' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:05:53.771 11:28:56 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # 
get_net_dev target1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:05:53.771 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 
00:05:53.772 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:05:53.772 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.772 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.772 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=1505324 00:05:53.772 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:53.772 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 1505324 00:05:53.772 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1505324 ']' 00:05:53.772 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.772 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.772 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.772 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.772 11:28:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.772 [2024-11-20 11:28:56.989081] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:05:53.772 [2024-11-20 11:28:56.989146] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:53.772 [2024-11-20 11:28:57.073201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.772 [2024-11-20 11:28:57.121412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:53.772 [2024-11-20 11:28:57.121459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:53.772 [2024-11-20 11:28:57.121472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.772 [2024-11-20 11:28:57.121481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.772 [2024-11-20 11:28:57.121489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:53.772 [2024-11-20 11:28:57.122671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.772 [2024-11-20 11:28:57.122750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.772 [2024-11-20 11:28:57.122752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.051 [2024-11-20 11:28:57.309268] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17509f0/0x1754ee0) succeed. 00:05:54.051 [2024-11-20 11:28:57.324813] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1751fe0/0x1796580) succeed. 
00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.051 Malloc0 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.051 Delay0 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.051 11:28:57 
nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 10.0.0.2 -s 4420 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.051 [2024-11-20 11:28:57.502617] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 10.0.0.2 -s 4420 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.051 11:28:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:54.309 [2024-11-20 11:28:57.608112] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:56.837 Initializing NVMe Controllers 00:05:56.837 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:56.837 controller IO queue size 128 less than required 00:05:56.837 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 
00:05:56.837 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:56.837 Initialization complete. Launching workers. 00:05:56.837 NS: RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41731 00:05:56.837 CTRLR: RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41792, failed to submit 62 00:05:56.837 success 41732, unsuccessful 60, failed 0 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:05:56.837 rmmod nvme_rdma 00:05:56.837 rmmod nvme_fabrics 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # modprobe 
-v -r nvme-fabrics 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 1505324 ']' 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 1505324 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1505324 ']' 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1505324 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1505324 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1505324' 00:05:56.837 killing process with pid 1505324 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1505324 00:05:56.837 11:28:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1505324 00:05:56.837 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:05:56.837 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # nvmf_fini 00:05:56.837 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@264 -- # local dev 00:05:56.837 11:29:00 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@267 -- # remove_target_ns 00:05:56.837 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:05:56.837 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:05:56.837 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:05:56.837 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@268 -- # delete_main_bridge 00:05:56.837 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:05:56.837 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@130 -- # return 0 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@275 -- # (( 4 == 
3 )) 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@284 -- # iptr 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@542 -- # iptables-save 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@542 -- # iptables-restore 00:05:56.838 00:05:56.838 real 0m9.865s 00:05:56.838 user 0m12.874s 00:05:56.838 sys 0m5.388s 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.838 ************************************ 00:05:56.838 END TEST nvmf_abort 00:05:56.838 ************************************ 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@17 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # 
'[' 3 -le 1 ']' 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:56.838 ************************************ 00:05:56.838 START TEST nvmf_ns_hotplug_stress 00:05:56.838 ************************************ 00:05:56.838 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:05:57.097 * Looking for test storage... 00:05:57.097 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
scripts/common.sh@337 -- # read -ra ver2 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- 
# echo 2 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:57.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.097 --rc genhtml_branch_coverage=1 00:05:57.097 --rc genhtml_function_coverage=1 00:05:57.097 --rc genhtml_legend=1 00:05:57.097 --rc geninfo_all_blocks=1 00:05:57.097 --rc geninfo_unexecuted_blocks=1 00:05:57.097 00:05:57.097 ' 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:57.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.097 --rc genhtml_branch_coverage=1 00:05:57.097 --rc genhtml_function_coverage=1 00:05:57.097 --rc genhtml_legend=1 00:05:57.097 --rc geninfo_all_blocks=1 00:05:57.097 --rc geninfo_unexecuted_blocks=1 00:05:57.097 00:05:57.097 ' 00:05:57.097 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:57.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.098 --rc genhtml_branch_coverage=1 00:05:57.098 --rc genhtml_function_coverage=1 00:05:57.098 --rc genhtml_legend=1 00:05:57.098 --rc geninfo_all_blocks=1 00:05:57.098 --rc geninfo_unexecuted_blocks=1 00:05:57.098 00:05:57.098 ' 00:05:57.098 11:29:00 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:57.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.098 --rc genhtml_branch_coverage=1 00:05:57.098 --rc genhtml_function_coverage=1 00:05:57.098 --rc genhtml_legend=1 00:05:57.098 --rc geninfo_all_blocks=1 00:05:57.098 --rc geninfo_unexecuted_blocks=1 00:05:57.098 00:05:57.098 ' 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 
00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:05:57.098 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # 
gather_supported_nvmf_pci_devs 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:05:57.098 11:29:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # net_devs=() 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # e810=() 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # local -ga e810 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # x722=() 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # local -ga x722 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # mlx=() 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # local -ga mlx 
00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:06:03.659 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:06:03.660 11:29:05 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:06:03.660 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:06:03.660 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:06:03.660 11:29:05 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:06:03.660 Found net devices under 0000:18:00.0: mlx_0_0 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.660 11:29:05 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:06:03.660 Found net devices under 0000:18:00.1: mlx_0_1 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # get_rdma_if_list 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # rdma_devs=() 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:06:03.660 11:29:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:03.660 
11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@89 -- # continue 2 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@89 -- # continue 2 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:06:03.660 11:29:06 
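The nested loops traced above (`common.sh` lines 85–89) intersect the discovered net devices with the RDMA-capable devices reported by `rxe_cfg`, using `continue 2` to move to the next outer iteration as soon as a match is found. A self-contained sketch of that pattern, with illustrative device lists standing in for the real sysfs/rxe_cfg output:

```shell
#!/usr/bin/env bash
# Keep only net devices that also appear in the RDMA-capable list.
# Device names here mirror the log but are hard-coded for illustration.
net_devs=(mlx_0_0 mlx_0_1 eno1)
rxe_net_devs=(mlx_0_0 mlx_0_1)

rdma_devs=()
for net_dev in "${net_devs[@]}"; do
  for rxe_net_dev in "${rxe_net_devs[@]}"; do
    if [[ $net_dev == "$rxe_net_dev" ]]; then
      rdma_devs+=("$net_dev")
      continue 2   # match found: skip to the next net_dev
    fi
  done
done

echo "${rdma_devs[@]}"   # -> mlx_0_0 mlx_0_1  (eno1 is dropped)
```

`continue 2` is the key idiom: it continues the loop two levels up, so a device is added at most once and the inner scan stops at the first match.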
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@61 -- # uname 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_cm 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_core 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_umad 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe iw_cm 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma 
ip_pool=0x0a000001 max 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # key_initiator=target1 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 
-- # initiator=mlx_0_0 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:03.660 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 
00:06:03.661 10.0.0.1 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:06:03.661 10.0.0.2 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:06:03.661 11:29:06 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # 
[[ -n 10.0.0.2 ]] 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:06:03.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:03.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:06:03.661 00:06:03.661 --- 10.0.0.2 ping statistics --- 00:06:03.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.661 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:03.661 11:29:06 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:06:03.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:03.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.023 ms 00:06:03.661 00:06:03.661 --- 10.0.0.2 ping statistics --- 00:06:03.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.661 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair++ )) 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # return 0 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:06:03.661 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:03.662 11:29:06 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:06:03.662 11:29:06 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local 
dev=target1 in_ns= ip 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:03.662 11:29:06 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=1508623 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 1508623 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1508623 ']' 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:03.662 [2024-11-20 11:29:06.371430] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:06:03.662 [2024-11-20 11:29:06.371488] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:03.662 [2024-11-20 11:29:06.451383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.662 [2024-11-20 11:29:06.498268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:03.662 [2024-11-20 11:29:06.498309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:03.662 [2024-11-20 11:29:06.498320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:03.662 [2024-11-20 11:29:06.498329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:03.662 [2024-11-20 11:29:06.498337] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
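The app-start records in this log come from `nvmfappstart` in nvmf/common.sh. As a hedged, dry-run sketch (binary path, trace mask `-e 0xFFFF`, core mask `-m 0xE`, and the `/var/tmp/spdk.sock` socket are taken from the log; the readiness loop is a simplified stand-in for the harness's `waitforlisten` helper, not its actual implementation):

```shell
#!/usr/bin/env bash
# Hedged sketch of the nvmf_tgt launch recorded in this log.
# DRY_RUN=1 (the default) only prints the command instead of launching.
NVMF_TGT=${NVMF_TGT:-/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt}
RPC_SOCK=/var/tmp/spdk.sock
DRY_RUN=${DRY_RUN:-1}

launch_cmd="$NVMF_TGT -i 0 -e 0xFFFF -m 0xE"   # trace mask and core mask from the log
echo "$launch_cmd"

if [[ $DRY_RUN -eq 0 ]]; then
    $launch_cmd &
    nvmfpid=$!
    # Simplified stand-in for waitforlisten: poll until the target's RPC
    # socket appears ("Waiting for process to start up and listen on UNIX
    # domain socket /var/tmp/spdk.sock...").
    for _ in $(seq 1 100); do
        [[ -S $RPC_SOCK ]] && break
        sleep 0.1
    done
fi
```

With `-m 0xE`, cores 1-3 host the reactors, which matches the three "Reactor started on core" notices that follow in the log.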
00:06:03.662 [2024-11-20 11:29:06.499594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.662 [2024-11-20 11:29:06.499616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.662 [2024-11-20 11:29:06.499618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:03.662 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:03.663 11:29:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:03.663 [2024-11-20 11:29:06.867705] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ff19f0/0x1ff5ee0) succeed. 00:06:03.663 [2024-11-20 11:29:06.876911] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ff2fe0/0x2037580) succeed. 
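The RPC sequence this hotplug-stress log exercises can be condensed into a dry-run sketch. `RPC=rpc.py` is a placeholder for the `scripts/rpc.py` path seen in the xtrace; the method names and arguments are copied from the log, and the fixed three-pass loop stands in for the real test's `kill -0 $PERF_PID` loop that runs while `spdk_nvme_perf` is alive:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the ns_hotplug_stress RPC flow seen in this log.
RPC=${RPC:-rpc.py}          # stands in for .../spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
CMDS=()
run() { CMDS+=("$RPC $*"); echo "$RPC $*"; }

# Target setup: transport, subsystem, listeners, then the bdev stack
# (Malloc0 wrapped in Delay0, plus a resizable null bdev).
run nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
run nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
run nvmf_subsystem_add_listener "$NQN" -t rdma -a 10.0.0.2 -s 4420
run nvmf_subsystem_add_listener discovery -t rdma -a 10.0.0.2 -s 4420
run bdev_malloc_create 32 512 -b Malloc0
run bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
run nvmf_subsystem_add_ns "$NQN" Delay0
run bdev_null_create NULL1 1000 512
run nvmf_subsystem_add_ns "$NQN" NULL1

# Stress loop: hot-remove nsid 1, re-add Delay0, and grow NULL1 by one
# block per iteration (null_size 1001, 1002, ... in the log).
null_size=1000
for _ in 1 2 3; do   # the real test loops while "kill -0 $PERF_PID"
    run nvmf_subsystem_remove_ns "$NQN" 1
    run nvmf_subsystem_add_ns "$NQN" Delay0
    null_size=$((null_size + 1))
    run bdev_null_resize NULL1 "$null_size"
done
```

The interleaved "Read completed with error (sct=0, sc=11)" bursts in the records below are the expected initiator-side namespace-removal errors from `spdk_nvme_perf` (`-Q 1000` suppresses all but one in each thousand), and each `true` followed by a bumped `null_size` marks one successful pass of this loop.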
00:06:03.663 11:29:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:03.920 11:29:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:06:03.920 [2024-11-20 11:29:07.371768] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:06:04.177 11:29:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 10.0.0.2 -s 4420 00:06:04.177 11:29:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:04.435 Malloc0 00:06:04.435 11:29:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:04.692 Delay0 00:06:04.692 11:29:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.949 11:29:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:04.950 NULL1 00:06:04.950 11:29:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:05.207 11:29:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:05.207 11:29:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1509009 00:06:05.207 11:29:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:05.207 11:29:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.578 Read completed with error (sct=0, sc=11) 00:06:06.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.578 11:29:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.578 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.578 11:29:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:06.578 11:29:10 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:06.835 true 00:06:06.835 11:29:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:06.835 11:29:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.767 11:29:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.767 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.767 11:29:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:07.767 11:29:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:08.024 true 00:06:08.024 11:29:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:08.024 11:29:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.957 11:29:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:09.214 11:29:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:09.214 11:29:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:09.214 true 00:06:09.214 11:29:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:09.214 11:29:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.145 11:29:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.403 11:29:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:10.403 11:29:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:10.403 true 00:06:10.403 11:29:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:10.403 11:29:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.334 11:29:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.334 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:06:11.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.591 11:29:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:11.591 11:29:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:11.591 true 00:06:11.848 11:29:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:11.848 11:29:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.779 11:29:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.779 11:29:16 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:12.779 11:29:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:13.036 true 00:06:13.036 11:29:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:13.036 11:29:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.968 11:29:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.968 11:29:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:13.968 11:29:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:14.225 true 00:06:14.225 11:29:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:14.225 11:29:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.154 11:29:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:15.411 11:29:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:15.411 11:29:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:15.411 true 00:06:15.411 11:29:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:15.411 11:29:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.340 11:29:19 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.598 11:29:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:16.598 11:29:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:16.598 true 00:06:16.598 11:29:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:16.598 11:29:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.529 11:29:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.786 11:29:21 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:17.786 11:29:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:17.786 true 00:06:17.786 11:29:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:17.786 11:29:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.718 11:29:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.975 11:29:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:18.975 11:29:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:19.232 true 00:06:19.232 11:29:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:19.232 11:29:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.054 11:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.054 11:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:20.054 11:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:20.311 true 00:06:20.311 11:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:20.311 11:29:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.239 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:06:21.239 11:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.239 11:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:21.239 11:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:21.495 true 00:06:21.495 11:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:21.495 11:29:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.424 11:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.424 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:06:22.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.681 11:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:22.681 11:29:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:22.681 true 00:06:22.681 11:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:22.681 11:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.612 11:29:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.869 11:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:23.869 11:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:23.869 true 00:06:24.126 
11:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:24.126 11:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.689 11:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.946 11:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:24.946 11:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:25.204 true 00:06:25.204 11:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:25.204 11:29:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:26.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.135 11:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.136 11:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:26.136 11:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:26.393 true 00:06:26.393 11:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:26.393 11:29:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.324 11:29:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.324 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:06:27.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.324 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.324 11:29:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:27.324 11:29:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:27.581 true 00:06:27.581 11:29:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:27.581 11:29:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.513 11:29:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.513 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:06:28.513 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.772 11:29:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:28.772 11:29:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:28.772 true 00:06:28.772 11:29:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:28.772 11:29:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.759 11:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.759 11:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:29.759 11:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:30.016 true 00:06:30.016 11:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:30.016 11:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:30.948 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.949 11:29:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.206 11:29:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:31.206 11:29:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:31.206 true 00:06:31.206 11:29:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:31.206 11:29:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.139 11:29:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.396 11:29:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:32.396 11:29:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:32.396 true 00:06:32.396 11:29:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:32.396 11:29:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.329 11:29:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.329 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:06:33.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.587 11:29:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:33.587 11:29:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:33.587 true 00:06:33.844 11:29:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:33.844 11:29:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.409 11:29:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.667 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.667 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.667 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.667 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.667 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.667 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.667 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.667 11:29:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:34.667 11:29:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:34.925 true 00:06:34.925 11:29:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:34.925 11:29:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.856 11:29:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.856 11:29:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:35.856 11:29:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:36.114 true 00:06:36.114 11:29:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:36.114 11:29:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.371 11:29:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.628 11:29:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:36.628 11:29:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:36.628 true 00:06:36.886 11:29:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:36.886 11:29:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.886 11:29:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.143 11:29:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:37.143 11:29:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:37.401 true 00:06:37.401 11:29:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:37.401 11:29:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.659 11:29:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.659 Initializing NVMe Controllers 00:06:37.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:37.659 Controller IO queue size 128, less than required. 00:06:37.659 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:37.659 Controller IO queue size 128, less than required. 00:06:37.659 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:37.659 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:37.659 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:37.659 Initialization complete. Launching workers. 00:06:37.659 ======================================================== 00:06:37.659 Latency(us) 00:06:37.659 Device Information : IOPS MiB/s Average min max 00:06:37.659 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6165.80 3.01 18250.90 1004.24 1138932.59 00:06:37.659 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 33089.43 16.16 3868.18 2257.20 295555.99 00:06:37.659 ======================================================== 00:06:37.659 Total : 39255.23 19.17 6127.27 1004.24 1138932.59 00:06:37.659 00:06:37.916 11:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:37.916 11:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:37.916 true 00:06:37.916 11:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509009 00:06:37.916 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1509009) - No such process 00:06:37.916 11:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1509009 00:06:37.916 11:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.173 11:29:41 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:38.430 11:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:38.430 11:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:38.430 11:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:38.430 11:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:38.430 11:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:38.688 null0 00:06:38.688 11:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:38.688 11:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:38.688 11:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:38.688 null1 00:06:38.688 11:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:38.688 11:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:38.688 11:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:38.946 null2 00:06:38.946 11:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:06:38.946 11:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:38.946 11:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:39.203 null3 00:06:39.203 11:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:39.203 11:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:39.203 11:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:39.461 null4 00:06:39.461 11:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:39.461 11:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:39.461 11:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:39.718 null5 00:06:39.718 11:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:39.718 11:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:39.718 11:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:39.718 null6 00:06:39.718 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:39.718 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:39.718 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:39.976 null7 00:06:39.976 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:39.976 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:39.976 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:39.976 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:39.976 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:39.976 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:39.976 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:39.976 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:39.976 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:39.976 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:39.976 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.977 
11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1513794 1513797 1513800 1513804 1513807 1513811 1513813 1513817 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.977 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:40.236 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:40.236 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:40.236 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.236 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.236 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:40.236 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.236 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.236 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:40.494 11:29:43 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.494 11:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:40.752 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:40.752 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:40.752 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.752 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.752 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.752 11:29:44 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.752 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:40.752 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.752 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.752 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.752 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:41.011 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.269 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:41.527 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.527 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:41.527 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:41.527 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:41.527 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:41.527 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:41.527 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:41.527 11:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:41.784 11:29:45 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:41.784 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.785 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.785 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:42.042 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:06:42.042 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.042 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.042 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.042 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:42.042 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.042 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:42.042 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:42.300 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.301 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:42.301 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.301 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:42.301 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.301 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.558 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.558 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.559 11:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:42.847 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.847 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:42.847 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.847 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:06:42.847 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:42.847 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.847 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.847 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.105 11:29:46 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.105 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:43.363 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.363 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:43.363 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:43.363 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:43.363 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:43.363 11:29:46 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:43.363 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:43.363 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:43.363 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.363 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.363 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:43.363 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.363 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.363 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:43.363 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.363 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.363 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:43.364 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.364 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.364 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:43.364 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.364 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.364 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:43.364 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.364 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.364 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:43.364 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.364 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.364 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:06:43.622 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.622 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.622 11:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:43.622 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:43.622 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:43.622 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:43.622 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:43.622 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.622 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:43.622 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:43.622 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:43.880 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.881 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.881 11:29:47 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.139 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:44.140 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:44.140 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:44.140 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:44.140 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.140 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:44.140 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:44.140 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
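[Note] The interleaved xtrace records above all come from three lines of the test script (the `ns_hotplug_stress.sh@16`, `@17`, and `@18` markers): a 10-iteration loop that attaches namespaces 1-8 (backed by bdevs null0-null7) to cnode1 and then detaches them. The out-of-order timestamps suggest the RPCs are issued concurrently. The following is a minimal sequential reconstruction of that loop, an assumption based only on the trace markers; `rpc` is a stub standing in for `spdk/scripts/rpc.py` against a live target:

```shell
#!/usr/bin/env bash
# Stub for spdk/scripts/rpc.py (the real script talks to a running SPDK target).
rpc() { printf 'rpc %s\n' "$*"; }

NQN=nqn.2016-06.io.spdk:cnode1
i=0
while (( i < 10 )); do                # loop guard seen at ns_hotplug_stress.sh@16
  for n in {1..8}; do                 # attach namespace n, backed by bdev null$((n-1))
    rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"      # sh@17
  done
  for n in {1..8}; do                 # detach the same namespaces again
    rpc nvmf_subsystem_remove_ns "$NQN" "$n"                       # sh@18
  done
  (( ++i ))
done
echo "iterations=$i"
```

In the real run the add/remove RPCs appear to be backgrounded, which is why the loop-counter and RPC trace lines interleave in the log above.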
00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:06:44.398 rmmod nvme_rdma 00:06:44.398 rmmod nvme_fabrics 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 1508623 ']' 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 1508623 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1508623 ']' 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1508623 00:06:44.398 11:29:47 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1508623 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1508623' 00:06:44.398 killing process with pid 1508623 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1508623 00:06:44.398 11:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1508623 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@264 -- # local dev 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@267 -- # remove_target_ns 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:06:44.658 11:29:48 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@268 -- # delete_main_bridge 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@130 -- # return 0 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:06:44.658 11:29:48 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@284 -- # iptr 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # iptables-save 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # iptables-restore 00:06:44.658 00:06:44.658 real 0m47.840s 00:06:44.658 user 3m22.595s 00:06:44.658 sys 0m13.668s 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:44.658 ************************************ 00:06:44.658 END TEST nvmf_ns_hotplug_stress 00:06:44.658 ************************************ 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.658 11:29:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:44.918 ************************************ 00:06:44.918 START TEST nvmf_delete_subsystem 00:06:44.918 ************************************ 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:06:44.918 * Looking for test storage... 00:06:44.918 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.918 11:29:48 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.918 --rc genhtml_branch_coverage=1 00:06:44.918 --rc genhtml_function_coverage=1 00:06:44.918 --rc genhtml_legend=1 00:06:44.918 --rc geninfo_all_blocks=1 00:06:44.918 --rc geninfo_unexecuted_blocks=1 00:06:44.918 00:06:44.918 ' 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.918 --rc genhtml_branch_coverage=1 00:06:44.918 --rc genhtml_function_coverage=1 00:06:44.918 --rc genhtml_legend=1 00:06:44.918 --rc geninfo_all_blocks=1 00:06:44.918 --rc geninfo_unexecuted_blocks=1 00:06:44.918 00:06:44.918 ' 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.918 --rc genhtml_branch_coverage=1 00:06:44.918 --rc genhtml_function_coverage=1 00:06:44.918 --rc genhtml_legend=1 00:06:44.918 --rc geninfo_all_blocks=1 00:06:44.918 --rc geninfo_unexecuted_blocks=1 00:06:44.918 00:06:44.918 ' 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:06:44.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.918 --rc genhtml_branch_coverage=1 00:06:44.918 --rc genhtml_function_coverage=1 00:06:44.918 --rc genhtml_legend=1 00:06:44.918 --rc geninfo_all_blocks=1 00:06:44.918 --rc geninfo_unexecuted_blocks=1 00:06:44.918 00:06:44.918 ' 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:44.918 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:06:44.919 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:06:44.919 11:29:48 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # xtrace_disable 00:06:44.919 11:29:48 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # pci_devs=() 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # net_devs=() 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # e810=() 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # local -ga e810 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # x722=() 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # local -ga x722 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # mlx=() 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # local -ga mlx 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.486 11:29:54 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # 
pci_devs+=("${mlx[@]}") 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:06:51.486 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:06:51.486 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ mlx5_core 
== unbound ]] 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:06:51.486 Found net devices under 0000:18:00.0: mlx_0_0 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
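The discovery loop traced above (nvmf/common.sh@227 and @243) turns each matched PCI address into interface names by globbing the device's sysfs `net/` directory and stripping the path prefix. A minimal self-contained sketch of that idiom, using a temp directory to stand in for sysfs so it runs anywhere (the PCI address and interface name are copied from the log, not probed from hardware):

```shell
# Stand-in for /sys/bus/pci/devices so the sketch is runnable on any host.
sysfs=$(mktemp -d)
pci=0000:18:00.0
mkdir -p "$sysfs/devices/$pci/net/mlx_0_0"

# Glob the device's net/ directory (full paths), as in nvmf/common.sh@227.
pci_net_devs=("$sysfs/devices/$pci/net/"*)
# Strip everything up to the last '/', as in nvmf/common.sh@243.
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under $pci: ${pci_net_devs[*]}"   # mlx_0_0
rm -rf "$sysfs"
```

The `##*/` expansion applies to every array element, which is why the script can assign the glob results and the bare names to the same variable in two steps.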
00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:06:51.486 Found net devices under 0000:18:00.1: mlx_0_1 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # get_rdma_if_list 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # rdma_devs=() 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:06:51.486 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:06:51.487 11:29:54 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@89 -- # continue 2 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@89 -- # continue 2 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # is_hw=yes 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:06:51.487 11:29:54 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@61 -- # uname 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_cm 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_core 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_umad 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe iw_cm 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:06:51.487 
11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # key_initiator=target1 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:06:51.487 11:29:54 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:06:51.487 10.0.0.1 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 
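The `val_to_ip` calls traced above (nvmf/setup.sh@11-13) map the pool value to a dotted quad: 167772161 is 0x0a000001, i.e. 10.0.0.1. A hedged sketch of that helper, reconstructed from the printf output in the log rather than from the script source:

```shell
# Treat the pool value as a 32-bit integer and print its four octets,
# mirroring the 'printf %u.%u.%u.%u' line in the trace.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (initiator side, mlx_0_0)
val_to_ip 167772162   # 10.0.0.2 (target side, mlx_0_1)
```

This is why consecutive pool values land on consecutive addresses: each interface pair simply consumes two adjacent integers from the 0x0a000001 pool.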
00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:06:51.487 10.0.0.2 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:06:51.487 11:29:54 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 
00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:06:51.487 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # local 
ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:06:51.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.031 ms 00:06:51.488 00:06:51.488 --- 10.0.0.2 ping statistics --- 00:06:51.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.488 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:06:51.488 11:29:54 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:06:51.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:51.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.028 ms 00:06:51.488 00:06:51.488 --- 10.0.0.2 ping statistics --- 00:06:51.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.488 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair++ )) 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # return 0 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:51.488 11:29:54 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # 
[[ -n '' ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:06:51.488 11:29:54 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target1 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:06:51.488 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- 
# '[' rdma == tcp ']' 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=1517473 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 1517473 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1517473 ']' 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:51.489 [2024-11-20 11:29:54.688698] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:06:51.489 [2024-11-20 11:29:54.688750] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.489 [2024-11-20 11:29:54.766808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.489 [2024-11-20 11:29:54.814020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.489 [2024-11-20 11:29:54.814066] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.489 [2024-11-20 11:29:54.814076] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.489 [2024-11-20 11:29:54.814084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.489 [2024-11-20 11:29:54.814091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:51.489 [2024-11-20 11:29:54.815180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.489 [2024-11-20 11:29:54.815183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:51.489 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.748 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.748 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:51.748 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.748 11:29:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.748 [2024-11-20 11:29:54.993637] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1225b60/0x122a050) succeed. 00:06:51.748 [2024-11-20 11:29:55.002647] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12270b0/0x126b6f0) succeed. 
00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.748 [2024-11-20 11:29:55.096328] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.748 NULL1 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.748 Delay0 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1517497 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:51.748 11:29:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:51.748 [2024-11-20 11:29:55.203277] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:54.278 11:29:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:54.278 11:29:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.278 11:29:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.844 NVMe io qpair process completion error 00:06:54.844 NVMe io qpair process completion error 00:06:54.844 NVMe io qpair process completion error 00:06:54.844 NVMe io qpair process completion error 00:06:54.844 NVMe io qpair process completion error 00:06:54.844 NVMe io qpair process completion error 00:06:54.844 11:29:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.844 11:29:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:54.844 11:29:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1517497 00:06:54.844 11:29:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:55.411 11:29:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:55.411 11:29:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1517497 00:06:55.411 11:29:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 
starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 
starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 starting I/O failed: -6 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 
00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Write completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.979 Read completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Read completed 
with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O 
failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 
00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 
Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 starting I/O failed: -6 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Write completed with 
error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.980 Write completed with error (sct=0, sc=8) 00:06:55.980 Read completed with error (sct=0, sc=8) 00:06:55.981 Write completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Write completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Write completed with error (sct=0, sc=8) 00:06:55.981 Write completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Write completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Write completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Write completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Write completed with error (sct=0, sc=8) 00:06:55.981 Write completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Write completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Write completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 
00:06:55.981 Write completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Write completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Write completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Read completed with error (sct=0, sc=8) 00:06:55.981 Initializing NVMe Controllers 00:06:55.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:55.981 Controller IO queue size 128, less than required. 00:06:55.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:55.981 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:55.981 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:55.981 Initialization complete. Launching workers. 
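The completions above all carry the NVMe status pair (sct=0, sc=8). In the NVMe base specification, status code type 0 is Generic Command Status, and, if memory serves, status code 0x08 in that set is "Command Aborted due to SQ Deletion" — consistent with this test deleting the subsystem while spdk_nvme_perf still has I/O in flight. A minimal decoding sketch; `decode_nvme_status` is an illustrative helper, not part of SPDK, and the mapping covers only the codes seen in this log:

```shell
# Hypothetical helper (not SPDK code): decode the (sct, sc) pair printed above.
# Per the NVMe base spec, SCT 0 is Generic Command Status; the SC values below
# are the ones relevant to this trace.
decode_nvme_status() {
    local sct=$1 sc=$2
    case "$sct" in
        0)
            case "$sc" in
                0) echo "Successful Completion" ;;
                8) echo "Command Aborted due to SQ Deletion" ;;
                *) echo "Generic Command Status sc=$sc" ;;
            esac
            ;;
        *) echo "sct=$sct sc=$sc" ;;
    esac
}

decode_nvme_status 0 8
```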
00:06:55.981 ========================================================
00:06:55.981 Latency(us)
00:06:55.981 Device Information : IOPS MiB/s Average min max
00:06:55.981 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 72.43 0.04 1768619.11 1000155.10 2977884.32
00:06:55.981 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 88.53 0.04 1453803.26 1000372.29 2978125.53
00:06:55.981 ========================================================
00:06:55.981 Total : 160.96 0.08 1595470.39 1000155.10 2978125.53
00:06:55.981
00:06:55.981 [2024-11-20 11:29:59.287435] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:06:55.981 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:55.981 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1517497
00:06:55.981 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:55.981 [2024-11-20 11:29:59.301884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:06:55.981 [2024-11-20 11:29:59.301907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
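The `(( delay++ > 30 ))` / `kill -0` / `sleep 0.5` lines above are delete_subsystem.sh polling for the perf process to exit. The idiom can be sketched as follows; `wait_for_exit` and its retry budget are illustrative names, not SPDK's exact code:

```shell
# Sketch of the polling loop seen in the trace: probe a PID until it exits,
# sleeping 0.5 s between probes, and give up after ~30 iterations.
wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do   # kill -0 sends no signal, only probes
        (( delay++ > 30 )) && return 1     # retry budget exhausted
        sleep 0.5
    done
    return 0                               # process is gone
}
```

`kill -0` only reports whether the PID can be signalled, so the loop ends as soon as the process disappears; a PID that never existed returns immediately.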
00:06:55.981 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1517497 00:06:56.549 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1517497) - No such process 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1517497 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1517497 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1517497 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 
)) 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.549 [2024-11-20 11:29:59.820606] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1518223 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@56 -- # delay=0 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518223 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:56.549 11:29:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:56.549 [2024-11-20 11:29:59.917429] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:57.115 11:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:57.115 11:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518223 00:06:57.115 11:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:57.374 11:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:57.374 11:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518223 00:06:57.374 11:30:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:57.942 11:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:57.942 11:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518223 00:06:57.942 11:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 
00:06:58.506 11:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:58.506 11:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518223 00:06:58.506 11:30:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:59.072 11:30:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:59.072 11:30:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518223 00:06:59.072 11:30:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:59.638 11:30:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:59.638 11:30:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518223 00:06:59.638 11:30:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:59.897 11:30:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:59.897 11:30:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518223 00:06:59.897 11:30:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:00.462 11:30:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:00.462 11:30:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518223 00:07:00.462 11:30:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:01.029 11:30:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 
00:07:01.029 11:30:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518223 00:07:01.029 11:30:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:01.595 11:30:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:01.595 11:30:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518223 00:07:01.595 11:30:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:02.162 11:30:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:02.162 11:30:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518223 00:07:02.162 11:30:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:02.747 11:30:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:02.747 11:30:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518223 00:07:02.747 11:30:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:03.029 11:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:03.029 11:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518223 00:07:03.029 11:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:03.603 11:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:03.603 11:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518223 00:07:03.603 
11:30:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:03.603 Initializing NVMe Controllers
00:07:03.603 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:03.603 Controller IO queue size 128, less than required.
00:07:03.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:03.603 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:03.603 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:03.603 Initialization complete. Launching workers.
00:07:03.603 ========================================================
00:07:03.603 Latency(us)
00:07:03.603 Device Information : IOPS MiB/s Average min max
00:07:03.603 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001503.70 1000047.94 1004123.38
00:07:03.603 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002672.02 1000070.35 1006305.07
00:07:03.603 ========================================================
00:07:03.603 Total : 256.00 0.12 1002087.86 1000047.94 1006305.07
00:07:03.603
00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1518223
00:07:04.168 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1518223) - No such process
00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1518223
00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:07:04.168 rmmod nvme_rdma 00:07:04.168 rmmod nvme_fabrics 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 1517473 ']' 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 1517473 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1517473 ']' 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1517473 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.168 11:30:07 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1517473 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1517473' 00:07:04.168 killing process with pid 1517473 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1517473 00:07:04.168 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1517473 00:07:04.426 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:07:04.426 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:07:04.426 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@264 -- # local dev 00:07:04.426 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@267 -- # remove_target_ns 00:07:04.426 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@268 -- # delete_main_bridge 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:07:04.427 11:30:07 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@130 -- # return 0 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # 
ip addr flush dev mlx_0_0 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@284 -- # iptr 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # iptables-save 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # iptables-restore 00:07:04.427 00:07:04.427 real 0m19.651s 00:07:04.427 user 0m48.840s 00:07:04.427 sys 0m6.033s 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.427 ************************************ 00:07:04.427 END TEST nvmf_delete_subsystem 00:07:04.427 ************************************ 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.427 11:30:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:04.427 ************************************ 00:07:04.427 START TEST nvmf_host_management 00:07:04.427 ************************************ 00:07:04.427 11:30:07 
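The `iptr` step traced just above (nvmf/common.sh@542) restores every iptables rule except those carrying the SPDK marker: `iptables-save | grep -v SPDK_NVMF | iptables-restore`. The sketch below exercises only the filter stage on a canned ruleset, so no root privileges or live firewall are needed (the sample rules are invented for illustration):

```shell
# Filter stage of the iptr pattern: drop any saved rule containing the
# SPDK_NVMF marker before handing the rest back to iptables-restore.
ruleset='-A INPUT -p tcp --dport 4420 -m comment --comment SPDK_NVMF -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT'

filtered=$(printf '%s\n' "$ruleset" | grep -v SPDK_NVMF)
printf '%s\n' "$filtered"

# On a live system the full pipeline would be:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
```

Tagging SPDK-installed rules with a fixed comment is what makes this one-liner cleanup possible: the tests can add arbitrary rules and still remove exactly their own at teardown.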
nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:07:04.687 * Looking for test storage... 00:07:04.687 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.687 11:30:08 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.687 11:30:08 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:04.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.687 --rc genhtml_branch_coverage=1 00:07:04.687 --rc genhtml_function_coverage=1 00:07:04.687 --rc genhtml_legend=1 00:07:04.687 --rc geninfo_all_blocks=1 00:07:04.687 --rc geninfo_unexecuted_blocks=1 00:07:04.687 00:07:04.687 ' 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:04.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.687 --rc genhtml_branch_coverage=1 00:07:04.687 --rc genhtml_function_coverage=1 00:07:04.687 --rc genhtml_legend=1 00:07:04.687 --rc geninfo_all_blocks=1 00:07:04.687 --rc geninfo_unexecuted_blocks=1 00:07:04.687 00:07:04.687 ' 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:04.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.687 --rc genhtml_branch_coverage=1 00:07:04.687 --rc genhtml_function_coverage=1 00:07:04.687 --rc genhtml_legend=1 00:07:04.687 --rc geninfo_all_blocks=1 00:07:04.687 --rc geninfo_unexecuted_blocks=1 00:07:04.687 00:07:04.687 ' 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:04.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.687 --rc genhtml_branch_coverage=1 00:07:04.687 --rc genhtml_function_coverage=1 00:07:04.687 --rc genhtml_legend=1 00:07:04.687 --rc geninfo_all_blocks=1 00:07:04.687 --rc geninfo_unexecuted_blocks=1 00:07:04.687 00:07:04.687 ' 
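The `lt 1.15 2` check traced above (scripts/common.sh `cmp_versions`) splits both version strings on `.`, `-` and `:` and compares field by field, padding missing fields with 0. A bash sketch of that logic (the real helper supports more operators; this covers only less-than):

```shell
# Field-wise version comparison in the style of scripts/common.sh:
# returns 0 (true) when $1 < $2, treating absent fields as 0.
version_lt() {
    local IFS=.-:
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i max=${#a[@]}
    (( ${#b[@]} > max )) && max=${#b[@]}
    for (( i = 0; i < max; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This is why lcov 1.15 is treated as older than 2: the first fields already decide the comparison, so `15 > 0` in the second field never gets a vote.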
00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:07:04.687 11:30:08 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.687 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:04.688 
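Each `source` of paths/export.sh above prepends the same toolchain directories again, which is why PATH balloons with repeated `/opt/go`, `/opt/protoc`, and `/opt/golangci` entries. Lookup still works (first match wins), but a dedup pass keeps it readable. A sketch, not part of export.sh itself, that preserves first-seen order:

```shell
# Deduplicate a PATH-style string while keeping first-seen order.
# awk splits on ':' via RS and prints each directory only once.
dedupe_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

dedupe_path "/usr/bin:/usr/local/bin:/usr/bin:/bin"
```

Because PATH resolution stops at the first hit, the duplicates in the trace are harmless to correctness; they only add per-lookup scan time and log noise.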
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:07:04.688 11:30:08 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # xtrace_disable 00:07:04.688 11:30:08 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@131 -- # pci_devs=() 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@131 -- # local -a pci_devs 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@132 -- # pci_net_devs=() 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@133 -- # pci_drivers=() 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@133 -- # local -A pci_drivers 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@135 -- # net_devs=() 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@135 -- # local -ga net_devs 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@136 -- # e810=() 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@136 -- # local -ga e810 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@137 -- # x722=() 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@137 -- # local -ga x722 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@138 -- # mlx=() 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@138 -- # local -ga mlx 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:07:11.246 11:30:14 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:11.246 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:11.246 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:11.246 11:30:14 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:11.246 Found net devices under 0000:18:00.0: mlx_0_0 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:11.246 Found net devices under 0000:18:00.1: mlx_0_1 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # get_rdma_if_list 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@75 -- # rdma_devs=() 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # (( 
2 == 0 )) 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:11.246 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@89 -- # continue 2 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@89 -- # continue 2 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # is_hw=yes 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:07:11.247 11:30:14 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@61 -- # uname 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_cm 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_core 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_umad 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe iw_cm 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:07:11.247 11:30:14 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # key_initiator=target1 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:07:11.247 11:30:14 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:07:11.247 10.0.0.1 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:07:11.247 11:30:14 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:07:11.247 10.0.0.2 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:07:11.247 11:30:14 
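The trace above shows `val_to_ip` turning the pool value 167772161 into 10.0.0.1 before `ip addr add` assigns it. A minimal re-implementation sketch of that conversion, assuming the helper simply unpacks a 32-bit integer into dotted-quad octets (the function body here is an illustration, not the SPDK source):

```shell
# Hypothetical sketch of the val_to_ip helper seen in nvmf/setup.sh:
# unpack a 32-bit integer into four octets and print dotted-quad form.
# 167772161 == 0x0A000001, which decodes to 10.0.0.1 as in the log.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # prints 10.0.0.1
val_to_ip 167772162   # prints 10.0.0.2
```

This explains why the setup loop can hand out consecutive pair addresses just by incrementing `ip_pool` by 2 per interface pair.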
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 1 00:07:11.247 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:07:11.248 11:30:14 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:07:11.248 
11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:07:11.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:11.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.029 ms 00:07:11.248 00:07:11.248 --- 10.0.0.2 ping statistics --- 00:07:11.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.248 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # 
[[ -n mlx_0_1 ]] 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:07:11.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:11.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.022 ms 00:07:11.248 00:07:11.248 --- 10.0.0.2 ping statistics --- 00:07:11.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.248 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair++ )) 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # return 0 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@107 -- # local dev=target0 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:11.248 11:30:14 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target1 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target1 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:07:11.248 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:11.249 
11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:11.249 
11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target1 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target1 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:07:11.249 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 
-- # '[' rdma == rdma ']' 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=1522713 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 1522713 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1522713 ']' 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.508 11:30:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.508 [2024-11-20 11:30:14.821906] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:07:11.508 [2024-11-20 11:30:14.821969] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.508 [2024-11-20 11:30:14.902208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.508 [2024-11-20 11:30:14.953165] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.508 [2024-11-20 11:30:14.953204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.508 [2024-11-20 11:30:14.953215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.508 [2024-11-20 11:30:14.953223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.508 [2024-11-20 11:30:14.953231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:11.508 [2024-11-20 11:30:14.954739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.508 [2024-11-20 11:30:14.954817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.508 [2024-11-20 11:30:14.954847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.508 [2024-11-20 11:30:14.954847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:11.767 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.767 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:11.767 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:07:11.767 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:11.767 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.767 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.767 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:11.767 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.767 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.767 [2024-11-20 11:30:15.131488] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11d8520/0x11dca10) succeed. 00:07:11.767 [2024-11-20 11:30:15.140716] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11d9bb0/0x121e0b0) succeed. 
00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.026 Malloc0 00:07:12.026 [2024-11-20 11:30:15.344734] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1522795 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@74 -- # waitforlisten 1522795 /var/tmp/bdevperf.sock 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1522795 ']' 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:12.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:07:12.026 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:07:12.026 { 00:07:12.027 "params": { 00:07:12.027 "name": "Nvme$subsystem", 00:07:12.027 "trtype": 
"$TEST_TRANSPORT", 00:07:12.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:12.027 "adrfam": "ipv4", 00:07:12.027 "trsvcid": "$NVMF_PORT", 00:07:12.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:12.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:12.027 "hdgst": ${hdgst:-false}, 00:07:12.027 "ddgst": ${ddgst:-false} 00:07:12.027 }, 00:07:12.027 "method": "bdev_nvme_attach_controller" 00:07:12.027 } 00:07:12.027 EOF 00:07:12.027 )") 00:07:12.027 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:07:12.027 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:07:12.027 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:07:12.027 11:30:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:07:12.027 "params": { 00:07:12.027 "name": "Nvme0", 00:07:12.027 "trtype": "rdma", 00:07:12.027 "traddr": "10.0.0.2", 00:07:12.027 "adrfam": "ipv4", 00:07:12.027 "trsvcid": "4420", 00:07:12.027 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:12.027 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:12.027 "hdgst": false, 00:07:12.027 "ddgst": false 00:07:12.027 }, 00:07:12.027 "method": "bdev_nvme_attach_controller" 00:07:12.027 }' 00:07:12.027 [2024-11-20 11:30:15.454893] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:07:12.027 [2024-11-20 11:30:15.454952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522795 ] 00:07:12.285 [2024-11-20 11:30:15.533863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.285 [2024-11-20 11:30:15.579176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.285 Running I/O for 10 seconds... 
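The `gen_nvmf_target_json` call traced above builds the bdevperf `--json` config by expanding a heredoc template per subsystem. A hedged sketch of that pattern, with the field values taken from the expanded output in the log (the function name and fixed subsystem index here are illustrative, not the SPDK source):

```shell
# Illustrative heredoc expansion in the spirit of gen_nvmf_target_json:
# an unquoted heredoc lets $subsystem interpolate into every field,
# producing one bdev_nvme_attach_controller params block per target.
gen_target_json() {
  local subsystem=${1:-0}
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "rdma",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json 0
```

In the real script the per-subsystem blocks are collected into an array and joined with `jq`, which is why the trace shows `config+=(...)`, `IFS=,`, and a final `printf '%s\n'`.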
00:07:12.852 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.852 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:12.852 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:12.852 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.852 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 
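Annotation: the `waitforio` trace ending above polls `bdev_get_iostat` over the bdevperf RPC socket and succeeds once the bdev has completed reads. A hedged sketch follows; `rpc_cmd` stands in for the suite's `scripts/rpc.py` wrapper, the 10 retries and `>= 100` threshold match the trace at `@54-@58`, and the short sleep between polls is an assumption.

```shell
# Sketch of host_management.sh's waitforio loop traced above:
# poll read-op counts until I/O is observed or retries run out.
waitforio() {
  local rpc_sock=$1 bdev=$2 i read_io_count ret=1
  for ((i = 10; i != 0; i--)); do
    read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
      jq -r '.bdevs[0].num_read_ops')
    # 100 reads is the liveness threshold checked in the trace
    if [ "$read_io_count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}
```

In the run above the first poll already reports `read_io_count=1707`, so the loop breaks immediately with `ret=0`.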
00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1707 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1707 -ge 100 ']' 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
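Annotation: at `@84-@85` the test removes `host0` from `cnode0`'s allow list while bdevperf is mid-run, re-adds it, then sleeps a second at `@87`; the SQ-deletion aborts and the `resetting controller` notice in the log that follows are the expected fallout. A minimal sketch of that provocation (again with `rpc_cmd` standing in for `scripts/rpc.py` against the target):

```shell
# Sketch of the disconnect/reconnect provocation traced above:
# drop the host NQN from the subsystem, restore it, let it settle.
provoke_host_reconnect() {
  local subnqn=${1:-nqn.2016-06.io.spdk:cnode0}
  local hostnqn=${2:-nqn.2016-06.io.spdk:host0}
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn"
  # give the initiator a moment to notice the abort and reconnect
  sleep 1
}
```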
00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.112 11:30:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:13.938 1800.00 IOPS, 112.50 MiB/s [2024-11-20T10:30:17.418Z] [2024-11-20 11:30:17.390004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001662f980 len:0x10000 key:0x1c0700 00:07:13.938 [2024-11-20 11:30:17.390044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.938 [2024-11-20 11:30:17.390065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001661f900 len:0x10000 key:0x1c0700 00:07:13.938 [2024-11-20 11:30:17.390080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.938 [2024-11-20 11:30:17.390092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001660f880 len:0x10000 key:0x1c0700 00:07:13.938 [2024-11-20 11:30:17.390102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.938 [2024-11-20 11:30:17.390113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000164eff80 len:0x10000 key:0x1c0600 00:07:13.938 [2024-11-20 11:30:17.390123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.938 [2024-11-20 
11:30:17.390134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000164dff00 len:0x10000 key:0x1c0600 00:07:13.938 [2024-11-20 11:30:17.390144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.938 [2024-11-20 11:30:17.390155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000164cfe80 len:0x10000 key:0x1c0600 00:07:13.938 [2024-11-20 11:30:17.390164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.938 [2024-11-20 11:30:17.390175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000164bfe00 len:0x10000 key:0x1c0600 00:07:13.938 [2024-11-20 11:30:17.390184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.938 [2024-11-20 11:30:17.390195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:104320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000164afd80 len:0x10000 key:0x1c0600 00:07:13.938 [2024-11-20 11:30:17.390205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.938 [2024-11-20 11:30:17.390216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001649fd00 len:0x10000 key:0x1c0600 00:07:13.938 [2024-11-20 11:30:17.390225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.938 [2024-11-20 11:30:17.390236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001648fc80 len:0x10000 key:0x1c0600 00:07:13.938 [2024-11-20 11:30:17.390245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.938 [2024-11-20 11:30:17.390256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001647fc00 len:0x10000 key:0x1c0600 00:07:13.938 [2024-11-20 11:30:17.390266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001646fb80 len:0x10000 key:0x1c0600 00:07:13.939 [2024-11-20 11:30:17.390286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001645fb00 len:0x10000 key:0x1c0600 00:07:13.939 [2024-11-20 11:30:17.390308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:105088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001644fa80 len:0x10000 key:0x1c0600 00:07:13.939 [2024-11-20 11:30:17.390329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390340] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001643fa00 len:0x10000 key:0x1c0600 00:07:13.939 [2024-11-20 11:30:17.390350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001642f980 len:0x10000 key:0x1c0600 00:07:13.939 [2024-11-20 11:30:17.390370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:105472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001641f900 len:0x10000 key:0x1c0600 00:07:13.939 [2024-11-20 11:30:17.390390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001640f880 len:0x10000 key:0x1c0600 00:07:13.939 [2024-11-20 11:30:17.390411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000084b100 len:0x10000 key:0x1bfe00 00:07:13.939 [2024-11-20 11:30:17.390431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:105856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000083b080 len:0x10000 key:0x1bfe00 00:07:13.939 [2024-11-20 11:30:17.390451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000082b000 len:0x10000 key:0x1bfe00 00:07:13.939 [2024-11-20 11:30:17.390472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000081af80 len:0x10000 key:0x1bfe00 00:07:13.939 [2024-11-20 11:30:17.390492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:106240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000080af00 len:0x10000 key:0x1bfe00 00:07:13.939 [2024-11-20 11:30:17.390512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:106368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106e4580 len:0x10000 key:0x1c0500 00:07:13.939 [2024-11-20 11:30:17.390532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99328 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20000b56f000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b54e000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b52d000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b50c000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b4eb000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b4ca000 len:0x10000 key:0x1c0400 
00:07:13.939 [2024-11-20 11:30:17.390655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b4a9000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b488000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b467000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b446000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:100608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b425000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390758] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b404000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b3e3000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b3c2000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b3a1000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b380000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390862] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b77f000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b75e000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:101632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b73d000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b71c000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:101888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6fb000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6da000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.390984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.390995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6b9000 len:0x10000 key:0x1c0400 00:07:13.939 [2024-11-20 11:30:17.391006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.939 [2024-11-20 11:30:17.391017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b698000 len:0x10000 key:0x1c0400 00:07:13.940 [2024-11-20 11:30:17.391026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.940 [2024-11-20 11:30:17.391041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b677000 len:0x10000 key:0x1c0400 00:07:13.940 [2024-11-20 11:30:17.391050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.940 [2024-11-20 11:30:17.391061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b656000 len:0x10000 key:0x1c0400 00:07:13.940 [2024-11-20 11:30:17.391070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 
00:07:13.940 [2024-11-20 11:30:17.391081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b635000 len:0x10000 key:0x1c0400 00:07:13.940 [2024-11-20 11:30:17.391090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.940 [2024-11-20 11:30:17.391101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b614000 len:0x10000 key:0x1c0400 00:07:13.940 [2024-11-20 11:30:17.391110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.940 [2024-11-20 11:30:17.391121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b5f3000 len:0x10000 key:0x1c0400 00:07:13.940 [2024-11-20 11:30:17.391130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.940 [2024-11-20 11:30:17.391141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b5d2000 len:0x10000 key:0x1c0400 00:07:13.940 [2024-11-20 11:30:17.391150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.940 [2024-11-20 11:30:17.391161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b5b1000 len:0x10000 key:0x1c0400 00:07:13.940 [2024-11-20 11:30:17.391170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.940 [2024-11-20 11:30:17.391181] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b590000 len:0x10000 key:0x1c0400 00:07:13.940 [2024-11-20 11:30:17.391190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.940 [2024-11-20 11:30:17.391201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106d4500 len:0x10000 key:0x1c0500 00:07:13.940 [2024-11-20 11:30:17.391210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.940 [2024-11-20 11:30:17.391221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:106624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106c4480 len:0x10000 key:0x1c0500 00:07:13.940 [2024-11-20 11:30:17.391231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.940 [2024-11-20 11:30:17.391242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:106752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106b4400 len:0x10000 key:0x1c0500 00:07:13.940 [2024-11-20 11:30:17.391251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.940 [2024-11-20 11:30:17.391262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106a4380 len:0x10000 key:0x1c0500 00:07:13.940 [2024-11-20 11:30:17.391271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.940 [2024-11-20 11:30:17.391282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010694300 len:0x10000 key:0x1c0500 00:07:13.940 [2024-11-20 11:30:17.391291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.940 [2024-11-20 11:30:17.391302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:107136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010684280 len:0x10000 key:0x1c0500 00:07:13.940 [2024-11-20 11:30:17.391311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.940 [2024-11-20 11:30:17.391322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010674200 len:0x10000 key:0x1c0500 00:07:13.940 [2024-11-20 11:30:17.391331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.940 [2024-11-20 11:30:17.391342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:107392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010664180 len:0x10000 key:0x1c0500 00:07:13.940 [2024-11-20 11:30:17.391351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec1de000 sqhd:7250 p:0 m:0 dnr:0 00:07:13.940 [2024-11-20 11:30:17.394078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:13.940 11:30:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1522795 00:07:13.940 11:30:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 
/var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:13.940 11:30:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:13.940 11:30:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:13.940 11:30:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:07:13.940 11:30:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:07:13.940 11:30:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:07:13.940 11:30:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:07:13.940 { 00:07:13.940 "params": { 00:07:13.940 "name": "Nvme$subsystem", 00:07:13.940 "trtype": "$TEST_TRANSPORT", 00:07:13.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:13.940 "adrfam": "ipv4", 00:07:13.940 "trsvcid": "$NVMF_PORT", 00:07:13.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:13.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:13.940 "hdgst": ${hdgst:-false}, 00:07:13.940 "ddgst": ${ddgst:-false} 00:07:13.940 }, 00:07:13.940 "method": "bdev_nvme_attach_controller" 00:07:13.940 } 00:07:13.940 EOF 00:07:13.940 )") 00:07:13.940 11:30:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:07:13.940 11:30:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 
00:07:14.199 11:30:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:07:14.199 11:30:17 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:07:14.199 "params": { 00:07:14.199 "name": "Nvme0", 00:07:14.199 "trtype": "rdma", 00:07:14.199 "traddr": "10.0.0.2", 00:07:14.199 "adrfam": "ipv4", 00:07:14.199 "trsvcid": "4420", 00:07:14.199 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:14.199 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:14.199 "hdgst": false, 00:07:14.199 "ddgst": false 00:07:14.199 }, 00:07:14.199 "method": "bdev_nvme_attach_controller" 00:07:14.199 }' 00:07:14.199 [2024-11-20 11:30:17.446676] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:07:14.199 [2024-11-20 11:30:17.446739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523072 ] 00:07:14.199 [2024-11-20 11:30:17.528286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.199 [2024-11-20 11:30:17.574127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.457 Running I/O for 1 seconds... 
00:07:15.393 2985.00 IOPS, 186.56 MiB/s 00:07:15.393 Latency(us) 00:07:15.393 [2024-11-20T10:30:18.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.393 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:15.393 Verification LBA range: start 0x0 length 0x400 00:07:15.393 Nvme0n1 : 1.01 3028.76 189.30 0.00 0.00 20697.87 658.92 41715.09 00:07:15.393 [2024-11-20T10:30:18.873Z] =================================================================================================================== 00:07:15.393 [2024-11-20T10:30:18.873Z] Total : 3028.76 189.30 0.00 0.00 20697.87 658.92 41715.09 00:07:15.653 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1522795 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:07:15.653 11:30:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:15.653 11:30:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:15.653 11:30:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:15.653 11:30:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:15.653 11:30:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:15.653 11:30:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:07:15.653 11:30:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:07:15.653 11:30:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 
00:07:15.653 11:30:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:07:15.653 11:30:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:07:15.653 11:30:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:07:15.653 11:30:18 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:07:15.653 rmmod nvme_rdma 00:07:15.653 rmmod nvme_fabrics 00:07:15.653 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:07:15.653 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:07:15.653 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:07:15.653 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 1522713 ']' 00:07:15.653 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 1522713 00:07:15.653 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1522713 ']' 00:07:15.653 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1522713 00:07:15.653 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:15.653 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.653 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1522713 00:07:15.653 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:15.653 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:15.653 
11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1522713' 00:07:15.653 killing process with pid 1522713 00:07:15.653 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1522713 00:07:15.653 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1522713 00:07:15.911 [2024-11-20 11:30:19.367415] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:15.911 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:07:15.911 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:07:15.911 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@264 -- # local dev 00:07:15.911 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@267 -- # remove_target_ns 00:07:15.911 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:15.911 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:15.911 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@268 -- # delete_main_bridge 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@130 -- # return 0 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e 
/sys/class/net/mlx_0_1/address ]] 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@41 -- # dev_map=() 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@284 -- # iptr 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@542 -- # iptables-save 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@542 -- # iptables-restore 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:16.170 00:07:16.170 real 0m11.528s 00:07:16.170 user 0m23.046s 00:07:16.170 sys 0m6.049s 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.170 ************************************ 00:07:16.170 END TEST nvmf_host_management 00:07:16.170 ************************************ 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:16.170 ************************************ 00:07:16.170 START TEST nvmf_lvol 00:07:16.170 ************************************ 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:07:16.170 * Looking for test storage... 
00:07:16.170 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.170 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.171 
11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:16.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.171 --rc genhtml_branch_coverage=1 00:07:16.171 --rc genhtml_function_coverage=1 00:07:16.171 --rc genhtml_legend=1 00:07:16.171 --rc geninfo_all_blocks=1 00:07:16.171 --rc 
geninfo_unexecuted_blocks=1 00:07:16.171 00:07:16.171 ' 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:16.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.171 --rc genhtml_branch_coverage=1 00:07:16.171 --rc genhtml_function_coverage=1 00:07:16.171 --rc genhtml_legend=1 00:07:16.171 --rc geninfo_all_blocks=1 00:07:16.171 --rc geninfo_unexecuted_blocks=1 00:07:16.171 00:07:16.171 ' 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:16.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.171 --rc genhtml_branch_coverage=1 00:07:16.171 --rc genhtml_function_coverage=1 00:07:16.171 --rc genhtml_legend=1 00:07:16.171 --rc geninfo_all_blocks=1 00:07:16.171 --rc geninfo_unexecuted_blocks=1 00:07:16.171 00:07:16.171 ' 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:16.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.171 --rc genhtml_branch_coverage=1 00:07:16.171 --rc genhtml_function_coverage=1 00:07:16.171 --rc genhtml_legend=1 00:07:16.171 --rc geninfo_all_blocks=1 00:07:16.171 --rc geninfo_unexecuted_blocks=1 00:07:16.171 00:07:16.171 ' 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:16.171 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
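The `scripts/common.sh` trace a few entries above (`cmp_versions 1.15 '<' 2`, `IFS=.-:`, `read -ra ver1`, then the per-component `(( ver1[v] < ver2[v] ))` checks) steps through a component-wise version comparison. A condensed sketch of the same technique, not the exact upstream code:

```shell
#!/usr/bin/env bash
# Condensed sketch of the version comparison traced from scripts/common.sh:
# split each version string on '.', '-', or ':' into an array, then compare
# the components numerically from left to right.
lt() { # succeeds (returns 0) when version $1 < version $2
    local -a ver1 ver2
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Missing components default to 0, so "2" compares like "2.0".
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1 # equal is not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

This is the check the harness uses above to decide whether the installed `lcov` is older than 2 before picking `--rc lcov_branch_coverage=1`-style options.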
00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:16.431 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:16.432 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # _remove_target_ns 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # xtrace_disable 00:07:16.432 11:30:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@131 -- # pci_devs=() 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@131 -- # local -a pci_devs 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@132 -- # pci_net_devs=() 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@133 -- # pci_drivers=() 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@133 -- # local -A pci_drivers 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@135 -- # net_devs=() 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@135 -- # local -ga net_devs 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@136 -- # e810=() 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@136 -- # local -ga e810 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@137 -- # x722=() 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@137 -- # local -ga x722 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@138 -- # mlx=() 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@138 -- 
# local -ga mlx 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:07:22.993 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # 
pci_devs+=("${mlx[@]}") 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:22.994 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:22.994 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:22.994 11:30:25 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:22.994 Found net devices under 0000:18:00.0: mlx_0_0 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
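The earlier `[: : integer expression expected` complaint from `nvmf/common.sh` line 31 comes from running an integer test against an empty string (`'[' '' -eq 1 ']'` in the trace). A minimal reproduction of that failure mode, with the usual `${var:-0}` default as the guard (variable name here is illustrative, not the one in `common.sh`):

```shell
#!/usr/bin/env bash
# Reproduce the "[: : integer expression expected" failure mode seen above:
# an empty string is not a valid operand for the numeric -eq test.
flag=""   # illustrative variable; the real script tests an unset config flag
[ "$flag" -eq 1 ] 2>/dev/null || echo "non-integer operand rejected"

# Supplying a numeric default makes the test well-formed:
[ "${flag:-0}" -eq 1 ] || echo "defaults to 0, test is well-formed"
```

In the log the error is harmless: the test fails with a non-zero status, the `[' -eq 1 ']` branch is simply skipped, and the run continues.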
00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:22.994 Found net devices under 0000:18:00.1: mlx_0_1 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # get_rdma_if_list 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@75 -- # rdma_devs=() 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@89 -- # continue 2 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # for net_dev in 
"${net_devs[@]}" 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@89 -- # continue 2 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # is_hw=yes 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@61 -- # uname 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_cm 00:07:22.994 11:30:25 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_core 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_umad 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe iw_cm 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 
key_target=target0 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # key_initiator=target1 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:07:22.994 10.0.0.1 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:07:22.994 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:07:22.995 10.0.0.2 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 1 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:07:22.995 11:30:25 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target0 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 
NVMF_TARGET_NS_CMD 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:07:22.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:22.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.029 ms 00:07:22.995 00:07:22.995 --- 10.0.0.2 ping statistics --- 00:07:22.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.995 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target0 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo mlx_0_1 
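The set_ip steps traced above first convert the integer pool value to dotted-quad form (val_to_ip, setup.sh@11-@13) and then persist the address in the device's ifalias file so get_ip_address can read it back (@210 writes it, @172 cats it). A runnable sketch: the bit-shifting body of val_to_ip is reconstructed from the printf line in the trace rather than copied from setup.sh, and /sys/class/net is replaced by a temp dir so no root or mlx devices are needed:

```shell
# val_to_ip reconstruction: 167772161 == 0x0A000001 -> 10.0.0.1
val_to_ip() {
    val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
        $(( (val >>  8) & 255 )) $((  val        & 255 ))
}

net=$(mktemp -d)                 # stand-in for /sys/class/net
mkdir -p "$net/mlx_0_0" "$net/mlx_0_1"

# set_ip sketch (setup.sh@204-@210): assign the address, then record it in ifalias.
set_ip() {
    dev=$1 ip=$(val_to_ip "$2")
    echo "ip addr add $ip/24 dev $dev"   # printed only; the real call needs root
    echo "$ip" > "$net/$dev/ifalias"
}
set_ip mlx_0_0 167772161
set_ip mlx_0_1 167772162
cat "$net/mlx_0_1/ifalias"               # what get_ip_address target0 reads back
```

The ifalias file is why later lookups like `cat /sys/class/net/mlx_0_1/ifalias` in the trace return 10.0.0.2 without re-parsing `ip addr` output.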
00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:07:22.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:22.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.023 ms 00:07:22.995 00:07:22.995 --- 10.0.0.2 ping statistics --- 00:07:22.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.995 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair++ )) 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # return 0 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target0 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:22.995 11:30:25 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target1 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target1 00:07:22.995 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ 
-n mlx_0_0 ]] 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target0 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@168 -- # dev=mlx_0_1 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target1 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target1 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:07:22.996 11:30:25 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=1526276 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 1526276 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1526276 ']' 00:07:22.996 11:30:25 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:22.996 11:30:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:22.996 [2024-11-20 11:30:25.976885] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:07:22.996 [2024-11-20 11:30:25.976948] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.996 [2024-11-20 11:30:26.058105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.996 [2024-11-20 11:30:26.108481] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.996 [2024-11-20 11:30:26.108519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.996 [2024-11-20 11:30:26.108529] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.996 [2024-11-20 11:30:26.108554] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
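nvmf_tgt is launched here with `-m 0x7` (common.sh@327), a hex core mask whose set bits select the reactor cores; that is why the trace shows exactly three "Reactor started on core" notices. A small decoder, as an illustrative sketch (the mask arithmetic is standard SPDK convention, not code from the harness):

```shell
# Decode an SPDK -m core mask; -m 0x7 = 0b111 selects cores 0, 1 and 2.
mask=$(( 0x7 ))
cores=""
bit=0
while [ $(( mask >> bit )) -ne 0 ]; do
    if [ $(( (mask >> bit) & 1 )) -eq 1 ]; then
        cores="$cores $bit"
    fi
    bit=$(( bit + 1 ))
done
echo "reactor cores:$cores"
```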
00:07:22.996 [2024-11-20 11:30:26.108561] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:22.996 [2024-11-20 11:30:26.109858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.996 [2024-11-20 11:30:26.110087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.996 [2024-11-20 11:30:26.110089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.562 11:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.563 11:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:23.563 11:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:07:23.563 11:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:23.563 11:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:23.563 11:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.563 11:30:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:23.821 [2024-11-20 11:30:27.067770] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8f56f0/0x8f9be0) succeed. 00:07:23.821 [2024-11-20 11:30:27.076815] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8f6ce0/0x93b280) succeed. 
00:07:23.821 11:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:24.079 11:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:24.079 11:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:24.338 11:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:24.338 11:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:24.338 11:30:27 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:24.597 11:30:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f5e094e2-c942-46a6-b65f-6818399caf65 00:07:24.597 11:30:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f5e094e2-c942-46a6-b65f-6818399caf65 lvol 20 00:07:24.855 11:30:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8469fd2b-7ef6-4fda-9f9c-f02e97cdab94 00:07:24.855 11:30:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:25.113 11:30:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8469fd2b-7ef6-4fda-9f9c-f02e97cdab94 00:07:25.372 11:30:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 10.0.0.2 -s 4420 00:07:25.372 [2024-11-20 11:30:28.799485] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:07:25.372 11:30:28 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 10.0.0.2 -s 4420 00:07:25.630 11:30:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1526678 00:07:25.630 11:30:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:25.630 11:30:29 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:26.568 11:30:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8469fd2b-7ef6-4fda-9f9c-f02e97cdab94 MY_SNAPSHOT 00:07:26.826 11:30:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e2931383-9c38-4fd8-9338-cfcfeb49c21d 00:07:26.826 11:30:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8469fd2b-7ef6-4fda-9f9c-f02e97cdab94 30 00:07:27.084 11:30:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e2931383-9c38-4fd8-9338-cfcfeb49c21d MY_CLONE 00:07:27.343 11:30:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7f900526-c8fe-4f9d-9265-a09f83e60ce0 00:07:27.343 11:30:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7f900526-c8fe-4f9d-9265-a09f83e60ce0 00:07:27.601 11:30:30 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1526678 00:07:37.574 Initializing NVMe Controllers 00:07:37.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:37.574 Controller IO queue size 128, less than required. 00:07:37.574 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:37.574 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:37.574 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:37.574 Initialization complete. Launching workers. 00:07:37.574 ======================================================== 00:07:37.574 Latency(us) 00:07:37.574 Device Information : IOPS MiB/s Average min max 00:07:37.574 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16386.90 64.01 7813.97 2389.28 37295.23 00:07:37.574 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16499.30 64.45 7760.02 3852.05 41902.63 00:07:37.574 ======================================================== 00:07:37.574 Total : 32886.20 128.46 7786.90 2389.28 41902.63 00:07:37.574 00:07:37.574 11:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:37.574 11:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8469fd2b-7ef6-4fda-9f9c-f02e97cdab94 00:07:37.574 11:30:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f5e094e2-c942-46a6-b65f-6818399caf65 00:07:37.832 
11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:07:37.832 rmmod nvme_rdma 00:07:37.832 rmmod nvme_fabrics 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 1526276 ']' 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 1526276 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1526276 ']' 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1526276 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1526276 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.832 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1526276' 00:07:37.833 killing process with pid 1526276 00:07:37.833 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1526276 00:07:37.833 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1526276 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@264 -- # local dev 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@267 -- # remove_target_ns 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@268 -- # delete_main_bridge 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@130 -- # return 0 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:07:38.091 
11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@284 -- # iptr 00:07:38.091 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@542 -- # iptables-save 00:07:38.092 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:07:38.092 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@542 -- # iptables-restore 00:07:38.092 00:07:38.092 real 0m22.011s 00:07:38.092 user 1m12.733s 00:07:38.092 sys 0m6.030s 00:07:38.092 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.092 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:38.092 ************************************ 00:07:38.092 END TEST nvmf_lvol 00:07:38.092 ************************************ 00:07:38.092 11:30:41 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:07:38.092 11:30:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:38.092 11:30:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.092 11:30:41 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:38.351 ************************************ 00:07:38.351 START TEST nvmf_lvs_grow 00:07:38.351 ************************************ 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:07:38.351 * Looking for test storage... 
00:07:38.351 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:38.351 11:30:41 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.351 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:38.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.352 --rc 
genhtml_branch_coverage=1 00:07:38.352 --rc genhtml_function_coverage=1 00:07:38.352 --rc genhtml_legend=1 00:07:38.352 --rc geninfo_all_blocks=1 00:07:38.352 --rc geninfo_unexecuted_blocks=1 00:07:38.352 00:07:38.352 ' 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:38.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.352 --rc genhtml_branch_coverage=1 00:07:38.352 --rc genhtml_function_coverage=1 00:07:38.352 --rc genhtml_legend=1 00:07:38.352 --rc geninfo_all_blocks=1 00:07:38.352 --rc geninfo_unexecuted_blocks=1 00:07:38.352 00:07:38.352 ' 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:38.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.352 --rc genhtml_branch_coverage=1 00:07:38.352 --rc genhtml_function_coverage=1 00:07:38.352 --rc genhtml_legend=1 00:07:38.352 --rc geninfo_all_blocks=1 00:07:38.352 --rc geninfo_unexecuted_blocks=1 00:07:38.352 00:07:38.352 ' 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:38.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.352 --rc genhtml_branch_coverage=1 00:07:38.352 --rc genhtml_function_coverage=1 00:07:38.352 --rc genhtml_legend=1 00:07:38.352 --rc geninfo_all_blocks=1 00:07:38.352 --rc geninfo_unexecuted_blocks=1 00:07:38.352 00:07:38.352 ' 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.352 11:30:41 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 
00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:38.352 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:07:38.352 
11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # xtrace_disable 00:07:38.352 11:30:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@131 -- # pci_devs=() 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@131 -- # local -a pci_devs 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@132 -- # pci_net_devs=() 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@133 -- # pci_drivers=() 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@133 -- # local -A pci_drivers 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@135 -- # net_devs=() 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@135 -- # local -ga net_devs 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@136 
-- # e810=() 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@136 -- # local -ga e810 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@137 -- # x722=() 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@137 -- # local -ga x722 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@138 -- # mlx=() 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@138 -- # local -ga mlx 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:44.919 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:44.919 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:44.919 Found net devices under 0000:18:00.0: mlx_0_0 00:07:44.919 11:30:47 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:44.919 Found net devices under 0000:18:00.1: mlx_0_1 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # get_rdma_if_list 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@75 -- # rdma_devs=() 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:44.919 11:30:47 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@89 -- # continue 2 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@89 -- # continue 2 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # is_hw=yes 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
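The discovery loop above pairs each PCI function with its net device by globbing the device's `net/` directory and then stripping the path prefix (nvmf/common.sh@227 and @243 in the trace). A minimal sketch of that prefix-stripping expansion, with the sysfs entry hard-coded so it runs without the test hardware (the path is taken from the trace; on the node the glob resolves it dynamically):

```shell
#!/usr/bin/env bash
# Mirror of the pci_net_devs handling seen at nvmf/common.sh@227/@243:
# glob the net/ subdirectory of a PCI device, then keep only the
# basename of each entry via the ${var##*/} expansion.
pci="0000:18:00.0"                                      # from the trace
pci_net_devs=("/sys/bus/pci/devices/$pci/net/mlx_0_0")  # what the glob yields on this node
pci_net_devs=("${pci_net_devs[@]##*/}")                 # strip leading path components
echo "${pci_net_devs[0]}"                               # mlx_0_0
```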
nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@61 -- # uname 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_cm 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe ib_core 00:07:44.919 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_umad 00:07:44.920 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:07:44.920 11:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe iw_cm 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@31 -- 
# (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # key_initiator=target1 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:07:44.920 
11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:07:44.920 10.0.0.1 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:07:44.920 11:30:48 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:07:44.920 10.0.0.2 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:07:44.920 11:30:48 
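The `set_ip` steps above turn the pool values 167772161 and 167772162 into 10.0.0.1 and 10.0.0.2 before assigning them to mlx_0_0 and mlx_0_1. The function name `val_to_ip` comes from the trace (nvmf/setup.sh@11-13); the bit-shift body below is a reconstruction of how a 32-bit value splits into the four octets the trace's `printf` receives:

```shell
#!/usr/bin/env bash
# Reconstruction of val_to_ip: split a 32-bit integer into four
# octets and print dotted-quad notation, matching the
# printf '%u.%u.%u.%u\n' 10 0 0 1 call visible in the trace.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >>  8) & 0xff )) \
    $((  val        & 0xff ))
}
val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

The pool base 0x0a000001 (declared at nvmf/setup.sh@25) is exactly 167772161, which is why consecutive pool values land on adjacent host addresses in 10.0.0.0/24.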
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 1 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:44.920 11:30:48 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:07:44.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:44.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.027 ms 00:07:44.920 00:07:44.920 --- 10.0.0.2 ping statistics --- 00:07:44.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.920 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:44.920 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:44.921 11:30:48 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:07:44.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:44.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.021 ms 00:07:44.921 00:07:44.921 --- 10.0.0.2 ping statistics --- 00:07:44.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.921 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair++ )) 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # return 0 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:07:44.921 11:30:48 
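The ping loop above calls `ping_ip` twice per pair, once passing the *name* `NVMF_TARGET_NS_CMD` so the helper can dereference it with a bash nameref and prefix the ping with a namespace-exec command. On this phy run that variable is empty, so both pings execute on the host. A dry-run sketch of the nameref pattern (the `echo` stands in for executing the command, and the netns value below is illustrative only):

```shell
#!/usr/bin/env bash
NVMF_TARGET_NS_CMD="ip netns exec nvmf_ns"   # illustrative; empty on this phy run

# Sketch of the ping_ip helper at nvmf/setup.sh@89-92: when given a
# variable name, dereference it via a nameref to obtain the command
# prefix, then build the ping invocation.
ping_ip() {
  local ip=$1 in_ns=${2:-} count=1 prefix=
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns    # nameref: resolve the passed variable name
    prefix=$ns
  fi
  echo "${prefix:+$prefix }ping -c $count $ip"   # dry-run: show the command
}

ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD   # ip netns exec nvmf_ns ping -c 1 10.0.0.2
ping_ip 10.0.0.2                      # ping -c 1 10.0.0.2
```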
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:07:44.921 
11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:07:44.921 
11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:07:44.921 
11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' 
rdma == tcp ']' 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=1531249 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 1531249 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1531249 ']' 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.921 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.921 [2024-11-20 11:30:48.261030] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:07:44.921 [2024-11-20 11:30:48.261087] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.921 [2024-11-20 11:30:48.339968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.921 [2024-11-20 11:30:48.387088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.921 [2024-11-20 11:30:48.387128] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.922 [2024-11-20 11:30:48.387138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.922 [2024-11-20 11:30:48.387146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.922 [2024-11-20 11:30:48.387153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:44.922 [2024-11-20 11:30:48.387617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.181 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.181 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:45.181 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:07:45.181 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:45.181 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:45.181 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.181 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:45.440 [2024-11-20 11:30:48.733961] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18ec020/0x18f0510) succeed. 00:07:45.440 [2024-11-20 11:30:48.743070] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18ed4d0/0x1931bb0) succeed. 
00:07:45.440 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:45.440 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.440 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.440 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:45.440 ************************************ 00:07:45.440 START TEST lvs_grow_clean 00:07:45.440 ************************************ 00:07:45.440 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:45.440 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:45.440 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:45.440 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:45.440 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:45.440 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:45.440 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:45.440 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:45.440 11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:45.440 
11:30:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:45.698 11:30:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:45.698 11:30:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:45.955 11:30:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5eecf95a-1d3b-45ad-ab12-6da6e7457a2a 00:07:45.955 11:30:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eecf95a-1d3b-45ad-ab12-6da6e7457a2a 00:07:45.955 11:30:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:46.213 11:30:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:46.213 11:30:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:46.213 11:30:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5eecf95a-1d3b-45ad-ab12-6da6e7457a2a lvol 150 00:07:46.213 11:30:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=00bb1c05-7155-4347-8305-91064463b947 00:07:46.213 11:30:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:46.213 11:30:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:46.471 [2024-11-20 11:30:49.833879] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:46.471 [2024-11-20 11:30:49.833936] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:46.471 true 00:07:46.471 11:30:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eecf95a-1d3b-45ad-ab12-6da6e7457a2a 00:07:46.471 11:30:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:46.730 11:30:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:46.730 11:30:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:46.988 11:30:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 00bb1c05-7155-4347-8305-91064463b947 00:07:46.988 11:30:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 10.0.0.2 -s 4420 00:07:47.246 [2024-11-20 11:30:50.620355] 
rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:07:47.246 11:30:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 10.0.0.2 -s 4420 00:07:47.505 11:30:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1531649 00:07:47.505 11:30:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:47.505 11:30:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1531649 /var/tmp/bdevperf.sock 00:07:47.505 11:30:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1531649 ']' 00:07:47.505 11:30:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:47.505 11:30:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.505 11:30:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:47.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:07:47.505 11:30:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:47.505 11:30:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.505 11:30:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:47.505 [2024-11-20 11:30:50.859831] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:07:47.505 [2024-11-20 11:30:50.859885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1531649 ] 00:07:47.505 [2024-11-20 11:30:50.937496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.505 [2024-11-20 11:30:50.981634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.764 11:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.764 11:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:47.764 11:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:48.023 Nvme0n1 00:07:48.023 11:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:48.285 [ 00:07:48.285 { 00:07:48.285 
"name": "Nvme0n1", 00:07:48.285 "aliases": [ 00:07:48.285 "00bb1c05-7155-4347-8305-91064463b947" 00:07:48.285 ], 00:07:48.285 "product_name": "NVMe disk", 00:07:48.285 "block_size": 4096, 00:07:48.285 "num_blocks": 38912, 00:07:48.285 "uuid": "00bb1c05-7155-4347-8305-91064463b947", 00:07:48.285 "numa_id": 0, 00:07:48.285 "assigned_rate_limits": { 00:07:48.285 "rw_ios_per_sec": 0, 00:07:48.285 "rw_mbytes_per_sec": 0, 00:07:48.285 "r_mbytes_per_sec": 0, 00:07:48.285 "w_mbytes_per_sec": 0 00:07:48.285 }, 00:07:48.285 "claimed": false, 00:07:48.285 "zoned": false, 00:07:48.285 "supported_io_types": { 00:07:48.285 "read": true, 00:07:48.285 "write": true, 00:07:48.285 "unmap": true, 00:07:48.285 "flush": true, 00:07:48.285 "reset": true, 00:07:48.285 "nvme_admin": true, 00:07:48.285 "nvme_io": true, 00:07:48.285 "nvme_io_md": false, 00:07:48.285 "write_zeroes": true, 00:07:48.285 "zcopy": false, 00:07:48.285 "get_zone_info": false, 00:07:48.285 "zone_management": false, 00:07:48.285 "zone_append": false, 00:07:48.285 "compare": true, 00:07:48.285 "compare_and_write": true, 00:07:48.285 "abort": true, 00:07:48.285 "seek_hole": false, 00:07:48.285 "seek_data": false, 00:07:48.285 "copy": true, 00:07:48.285 "nvme_iov_md": false 00:07:48.285 }, 00:07:48.285 "memory_domains": [ 00:07:48.285 { 00:07:48.285 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:07:48.285 "dma_device_type": 0 00:07:48.285 } 00:07:48.285 ], 00:07:48.285 "driver_specific": { 00:07:48.285 "nvme": [ 00:07:48.285 { 00:07:48.285 "trid": { 00:07:48.285 "trtype": "RDMA", 00:07:48.285 "adrfam": "IPv4", 00:07:48.285 "traddr": "10.0.0.2", 00:07:48.285 "trsvcid": "4420", 00:07:48.285 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:48.285 }, 00:07:48.285 "ctrlr_data": { 00:07:48.285 "cntlid": 1, 00:07:48.285 "vendor_id": "0x8086", 00:07:48.285 "model_number": "SPDK bdev Controller", 00:07:48.285 "serial_number": "SPDK0", 00:07:48.285 "firmware_revision": "25.01", 00:07:48.285 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:07:48.285 "oacs": { 00:07:48.285 "security": 0, 00:07:48.285 "format": 0, 00:07:48.285 "firmware": 0, 00:07:48.285 "ns_manage": 0 00:07:48.285 }, 00:07:48.285 "multi_ctrlr": true, 00:07:48.285 "ana_reporting": false 00:07:48.285 }, 00:07:48.285 "vs": { 00:07:48.285 "nvme_version": "1.3" 00:07:48.285 }, 00:07:48.285 "ns_data": { 00:07:48.285 "id": 1, 00:07:48.285 "can_share": true 00:07:48.285 } 00:07:48.285 } 00:07:48.285 ], 00:07:48.285 "mp_policy": "active_passive" 00:07:48.285 } 00:07:48.285 } 00:07:48.285 ] 00:07:48.285 11:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1531668 00:07:48.285 11:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:48.285 11:30:51 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:48.285 Running I/O for 10 seconds... 
00:07:49.257 Latency(us) 00:07:49.257 [2024-11-20T10:30:52.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.257 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.257 Nvme0n1 : 1.00 33600.00 131.25 0.00 0.00 0.00 0.00 0.00 00:07:49.257 [2024-11-20T10:30:52.737Z] =================================================================================================================== 00:07:49.257 [2024-11-20T10:30:52.737Z] Total : 33600.00 131.25 0.00 0.00 0.00 0.00 0.00 00:07:49.257 00:07:50.219 11:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5eecf95a-1d3b-45ad-ab12-6da6e7457a2a 00:07:50.219 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.219 Nvme0n1 : 2.00 33872.00 132.31 0.00 0.00 0.00 0.00 0.00 00:07:50.219 [2024-11-20T10:30:53.699Z] =================================================================================================================== 00:07:50.219 [2024-11-20T10:30:53.699Z] Total : 33872.00 132.31 0.00 0.00 0.00 0.00 0.00 00:07:50.219 00:07:50.477 true 00:07:50.477 11:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eecf95a-1d3b-45ad-ab12-6da6e7457a2a 00:07:50.477 11:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:50.736 11:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:50.736 11:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:50.736 11:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 
1531668 00:07:51.303 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.303 Nvme0n1 : 3.00 34006.33 132.84 0.00 0.00 0.00 0.00 0.00 00:07:51.303 [2024-11-20T10:30:54.783Z] =================================================================================================================== 00:07:51.303 [2024-11-20T10:30:54.783Z] Total : 34006.33 132.84 0.00 0.00 0.00 0.00 0.00 00:07:51.303 00:07:52.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.238 Nvme0n1 : 4.00 34176.00 133.50 0.00 0.00 0.00 0.00 0.00 00:07:52.238 [2024-11-20T10:30:55.718Z] =================================================================================================================== 00:07:52.238 [2024-11-20T10:30:55.718Z] Total : 34176.00 133.50 0.00 0.00 0.00 0.00 0.00 00:07:52.238 00:07:53.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.614 Nvme0n1 : 5.00 34252.60 133.80 0.00 0.00 0.00 0.00 0.00 00:07:53.614 [2024-11-20T10:30:57.094Z] =================================================================================================================== 00:07:53.614 [2024-11-20T10:30:57.094Z] Total : 34252.60 133.80 0.00 0.00 0.00 0.00 0.00 00:07:53.614 00:07:54.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.547 Nvme0n1 : 6.00 34293.17 133.96 0.00 0.00 0.00 0.00 0.00 00:07:54.547 [2024-11-20T10:30:58.027Z] =================================================================================================================== 00:07:54.547 [2024-11-20T10:30:58.027Z] Total : 34293.17 133.96 0.00 0.00 0.00 0.00 0.00 00:07:54.547 00:07:55.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.482 Nvme0n1 : 7.00 34359.43 134.22 0.00 0.00 0.00 0.00 0.00 00:07:55.482 [2024-11-20T10:30:58.962Z] =================================================================================================================== 
00:07:55.482 [2024-11-20T10:30:58.962Z] Total : 34359.43 134.22 0.00 0.00 0.00 0.00 0.00 00:07:55.482 00:07:56.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.417 Nvme0n1 : 8.00 34403.50 134.39 0.00 0.00 0.00 0.00 0.00 00:07:56.417 [2024-11-20T10:30:59.897Z] =================================================================================================================== 00:07:56.417 [2024-11-20T10:30:59.897Z] Total : 34403.50 134.39 0.00 0.00 0.00 0.00 0.00 00:07:56.417 00:07:57.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.350 Nvme0n1 : 9.00 34356.78 134.21 0.00 0.00 0.00 0.00 0.00 00:07:57.350 [2024-11-20T10:31:00.831Z] =================================================================================================================== 00:07:57.351 [2024-11-20T10:31:00.831Z] Total : 34356.78 134.21 0.00 0.00 0.00 0.00 0.00 00:07:57.351 00:07:58.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.286 Nvme0n1 : 10.00 34172.20 133.49 0.00 0.00 0.00 0.00 0.00 00:07:58.286 [2024-11-20T10:31:01.766Z] =================================================================================================================== 00:07:58.286 [2024-11-20T10:31:01.766Z] Total : 34172.20 133.49 0.00 0.00 0.00 0.00 0.00 00:07:58.286 00:07:58.286 00:07:58.286 Latency(us) 00:07:58.286 [2024-11-20T10:31:01.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.286 Nvme0n1 : 10.00 34170.82 133.48 0.00 0.00 3743.09 2621.44 16298.52 00:07:58.286 [2024-11-20T10:31:01.766Z] =================================================================================================================== 00:07:58.286 [2024-11-20T10:31:01.766Z] Total : 34170.82 133.48 0.00 0.00 3743.09 2621.44 16298.52 00:07:58.286 { 00:07:58.286 "results": [ 00:07:58.286 { 00:07:58.286 
"job": "Nvme0n1", 00:07:58.286 "core_mask": "0x2", 00:07:58.286 "workload": "randwrite", 00:07:58.286 "status": "finished", 00:07:58.286 "queue_depth": 128, 00:07:58.286 "io_size": 4096, 00:07:58.286 "runtime": 10.003505, 00:07:58.286 "iops": 34170.823126494164, 00:07:58.286 "mibps": 133.47977783786783, 00:07:58.286 "io_failed": 0, 00:07:58.286 "io_timeout": 0, 00:07:58.286 "avg_latency_us": 3743.088802046898, 00:07:58.286 "min_latency_us": 2621.44, 00:07:58.286 "max_latency_us": 16298.518260869565 00:07:58.286 } 00:07:58.286 ], 00:07:58.286 "core_count": 1 00:07:58.286 } 00:07:58.286 11:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1531649 00:07:58.286 11:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1531649 ']' 00:07:58.286 11:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1531649 00:07:58.286 11:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:58.286 11:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.286 11:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1531649 00:07:58.544 11:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:58.545 11:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:58.545 11:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1531649' 00:07:58.545 killing process with pid 1531649 00:07:58.545 11:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1531649 
00:07:58.545 Received shutdown signal, test time was about 10.000000 seconds 00:07:58.545 00:07:58.545 Latency(us) 00:07:58.545 [2024-11-20T10:31:02.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.545 [2024-11-20T10:31:02.025Z] =================================================================================================================== 00:07:58.545 [2024-11-20T10:31:02.025Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:58.545 11:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1531649 00:07:58.545 11:31:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 10.0.0.2 -s 4420 00:07:58.803 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:59.061 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eecf95a-1d3b-45ad-ab12-6da6e7457a2a 00:07:59.062 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:59.320 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:59.320 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:59.320 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:59.320 [2024-11-20 11:31:02.734005] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: 
bdev aio_bdev being removed: closing lvstore lvs 00:07:59.320 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eecf95a-1d3b-45ad-ab12-6da6e7457a2a 00:07:59.320 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:59.320 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eecf95a-1d3b-45ad-ab12-6da6e7457a2a 00:07:59.320 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:59.580 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.580 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:59.580 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.580 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:59.580 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.580 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:59.580 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:59.580 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eecf95a-1d3b-45ad-ab12-6da6e7457a2a 00:07:59.580 request: 00:07:59.580 { 00:07:59.580 "uuid": "5eecf95a-1d3b-45ad-ab12-6da6e7457a2a", 00:07:59.580 "method": "bdev_lvol_get_lvstores", 00:07:59.580 "req_id": 1 00:07:59.580 } 00:07:59.580 Got JSON-RPC error response 00:07:59.580 response: 00:07:59.580 { 00:07:59.580 "code": -19, 00:07:59.580 "message": "No such device" 00:07:59.580 } 00:07:59.580 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:59.580 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.580 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:59.580 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.580 11:31:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:59.839 aio_bdev 00:07:59.839 11:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 00bb1c05-7155-4347-8305-91064463b947 00:07:59.839 11:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=00bb1c05-7155-4347-8305-91064463b947 00:07:59.839 11:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:59.839 11:31:03 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:59.839 11:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:59.839 11:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:59.839 11:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:00.098 11:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 00bb1c05-7155-4347-8305-91064463b947 -t 2000 00:08:00.098 [ 00:08:00.098 { 00:08:00.098 "name": "00bb1c05-7155-4347-8305-91064463b947", 00:08:00.098 "aliases": [ 00:08:00.098 "lvs/lvol" 00:08:00.098 ], 00:08:00.098 "product_name": "Logical Volume", 00:08:00.098 "block_size": 4096, 00:08:00.098 "num_blocks": 38912, 00:08:00.098 "uuid": "00bb1c05-7155-4347-8305-91064463b947", 00:08:00.098 "assigned_rate_limits": { 00:08:00.098 "rw_ios_per_sec": 0, 00:08:00.098 "rw_mbytes_per_sec": 0, 00:08:00.098 "r_mbytes_per_sec": 0, 00:08:00.098 "w_mbytes_per_sec": 0 00:08:00.098 }, 00:08:00.098 "claimed": false, 00:08:00.098 "zoned": false, 00:08:00.098 "supported_io_types": { 00:08:00.098 "read": true, 00:08:00.098 "write": true, 00:08:00.098 "unmap": true, 00:08:00.098 "flush": false, 00:08:00.098 "reset": true, 00:08:00.098 "nvme_admin": false, 00:08:00.098 "nvme_io": false, 00:08:00.098 "nvme_io_md": false, 00:08:00.098 "write_zeroes": true, 00:08:00.098 "zcopy": false, 00:08:00.098 "get_zone_info": false, 00:08:00.098 "zone_management": false, 00:08:00.098 "zone_append": false, 00:08:00.098 "compare": false, 00:08:00.098 "compare_and_write": false, 00:08:00.098 "abort": false, 00:08:00.098 "seek_hole": true, 00:08:00.098 
"seek_data": true, 00:08:00.098 "copy": false, 00:08:00.098 "nvme_iov_md": false 00:08:00.098 }, 00:08:00.098 "driver_specific": { 00:08:00.098 "lvol": { 00:08:00.098 "lvol_store_uuid": "5eecf95a-1d3b-45ad-ab12-6da6e7457a2a", 00:08:00.098 "base_bdev": "aio_bdev", 00:08:00.098 "thin_provision": false, 00:08:00.098 "num_allocated_clusters": 38, 00:08:00.098 "snapshot": false, 00:08:00.098 "clone": false, 00:08:00.098 "esnap_clone": false 00:08:00.098 } 00:08:00.098 } 00:08:00.098 } 00:08:00.098 ] 00:08:00.098 11:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:00.098 11:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eecf95a-1d3b-45ad-ab12-6da6e7457a2a 00:08:00.098 11:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:00.356 11:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:00.356 11:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5eecf95a-1d3b-45ad-ab12-6da6e7457a2a 00:08:00.356 11:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:00.615 11:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:00.615 11:31:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 00bb1c05-7155-4347-8305-91064463b947 00:08:00.874 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5eecf95a-1d3b-45ad-ab12-6da6e7457a2a 00:08:01.133 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:01.133 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:01.133 00:08:01.133 real 0m15.754s 00:08:01.133 user 0m15.582s 00:08:01.133 sys 0m1.220s 00:08:01.133 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.133 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:01.133 ************************************ 00:08:01.133 END TEST lvs_grow_clean 00:08:01.133 ************************************ 00:08:01.392 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:01.392 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:01.392 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.392 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:01.392 ************************************ 00:08:01.392 START TEST lvs_grow_dirty 00:08:01.392 ************************************ 00:08:01.392 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:01.392 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:01.392 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:01.392 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:01.392 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:01.392 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:01.392 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:01.392 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:01.392 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:01.392 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:01.651 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:01.651 11:31:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:01.651 11:31:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e7a52fdf-0c8f-49c9-987d-f8119003c3a2 00:08:01.651 11:31:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7a52fdf-0c8f-49c9-987d-f8119003c3a2 00:08:01.651 11:31:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:01.909 11:31:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:01.909 11:31:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:01.909 11:31:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e7a52fdf-0c8f-49c9-987d-f8119003c3a2 lvol 150 00:08:02.168 11:31:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f7f2d961-4f42-4a28-968a-56ff9ecd123a 00:08:02.168 11:31:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:02.168 11:31:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:02.426 [2024-11-20 11:31:05.667733] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:02.426 [2024-11-20 11:31:05.667803] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:02.426 true 00:08:02.426 11:31:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7a52fdf-0c8f-49c9-987d-f8119003c3a2 00:08:02.426 11:31:05 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:02.426 11:31:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:02.426 11:31:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:02.684 11:31:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f7f2d961-4f42-4a28-968a-56ff9ecd123a 00:08:02.941 11:31:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 10.0.0.2 -s 4420 00:08:02.941 [2024-11-20 11:31:06.390073] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:08:02.941 11:31:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 10.0.0.2 -s 4420 00:08:03.200 11:31:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1533739 00:08:03.200 11:31:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:03.200 11:31:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:03.200 11:31:06 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1533739 /var/tmp/bdevperf.sock 00:08:03.200 11:31:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1533739 ']' 00:08:03.200 11:31:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:03.200 11:31:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.200 11:31:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:03.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:03.200 11:31:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.200 11:31:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:03.200 [2024-11-20 11:31:06.641728] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:08:03.200 [2024-11-20 11:31:06.641782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1533739 ] 00:08:03.458 [2024-11-20 11:31:06.718652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.458 [2024-11-20 11:31:06.762450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.458 11:31:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.458 11:31:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:03.458 11:31:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:03.716 Nvme0n1 00:08:03.716 11:31:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:03.975 [ 00:08:03.975 { 00:08:03.975 "name": "Nvme0n1", 00:08:03.975 "aliases": [ 00:08:03.975 "f7f2d961-4f42-4a28-968a-56ff9ecd123a" 00:08:03.975 ], 00:08:03.975 "product_name": "NVMe disk", 00:08:03.975 "block_size": 4096, 00:08:03.975 "num_blocks": 38912, 00:08:03.975 "uuid": "f7f2d961-4f42-4a28-968a-56ff9ecd123a", 00:08:03.975 "numa_id": 0, 00:08:03.975 "assigned_rate_limits": { 00:08:03.975 "rw_ios_per_sec": 0, 00:08:03.975 "rw_mbytes_per_sec": 0, 00:08:03.975 "r_mbytes_per_sec": 0, 00:08:03.975 "w_mbytes_per_sec": 0 00:08:03.975 }, 00:08:03.975 "claimed": false, 00:08:03.975 "zoned": false, 00:08:03.975 "supported_io_types": { 00:08:03.975 "read": true, 
00:08:03.975 "write": true, 00:08:03.975 "unmap": true, 00:08:03.975 "flush": true, 00:08:03.975 "reset": true, 00:08:03.975 "nvme_admin": true, 00:08:03.975 "nvme_io": true, 00:08:03.975 "nvme_io_md": false, 00:08:03.975 "write_zeroes": true, 00:08:03.975 "zcopy": false, 00:08:03.975 "get_zone_info": false, 00:08:03.975 "zone_management": false, 00:08:03.975 "zone_append": false, 00:08:03.975 "compare": true, 00:08:03.975 "compare_and_write": true, 00:08:03.975 "abort": true, 00:08:03.975 "seek_hole": false, 00:08:03.975 "seek_data": false, 00:08:03.975 "copy": true, 00:08:03.975 "nvme_iov_md": false 00:08:03.975 }, 00:08:03.975 "memory_domains": [ 00:08:03.975 { 00:08:03.975 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:08:03.975 "dma_device_type": 0 00:08:03.975 } 00:08:03.975 ], 00:08:03.975 "driver_specific": { 00:08:03.975 "nvme": [ 00:08:03.975 { 00:08:03.975 "trid": { 00:08:03.975 "trtype": "RDMA", 00:08:03.975 "adrfam": "IPv4", 00:08:03.975 "traddr": "10.0.0.2", 00:08:03.975 "trsvcid": "4420", 00:08:03.975 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:03.975 }, 00:08:03.975 "ctrlr_data": { 00:08:03.975 "cntlid": 1, 00:08:03.975 "vendor_id": "0x8086", 00:08:03.975 "model_number": "SPDK bdev Controller", 00:08:03.975 "serial_number": "SPDK0", 00:08:03.975 "firmware_revision": "25.01", 00:08:03.975 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:03.975 "oacs": { 00:08:03.975 "security": 0, 00:08:03.975 "format": 0, 00:08:03.975 "firmware": 0, 00:08:03.975 "ns_manage": 0 00:08:03.975 }, 00:08:03.975 "multi_ctrlr": true, 00:08:03.975 "ana_reporting": false 00:08:03.975 }, 00:08:03.975 "vs": { 00:08:03.975 "nvme_version": "1.3" 00:08:03.975 }, 00:08:03.975 "ns_data": { 00:08:03.975 "id": 1, 00:08:03.975 "can_share": true 00:08:03.975 } 00:08:03.975 } 00:08:03.975 ], 00:08:03.975 "mp_policy": "active_passive" 00:08:03.975 } 00:08:03.975 } 00:08:03.975 ] 00:08:03.975 11:31:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1533910 00:08:03.975 11:31:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:03.975 11:31:07 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:03.975 Running I/O for 10 seconds... 00:08:05.349 Latency(us) 00:08:05.349 [2024-11-20T10:31:08.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.349 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.349 Nvme0n1 : 1.00 33696.00 131.62 0.00 0.00 0.00 0.00 0.00 00:08:05.349 [2024-11-20T10:31:08.829Z] =================================================================================================================== 00:08:05.349 [2024-11-20T10:31:08.829Z] Total : 33696.00 131.62 0.00 0.00 0.00 0.00 0.00 00:08:05.349 00:08:05.917 11:31:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e7a52fdf-0c8f-49c9-987d-f8119003c3a2 00:08:06.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.178 Nvme0n1 : 2.00 34047.50 133.00 0.00 0.00 0.00 0.00 0.00 00:08:06.178 [2024-11-20T10:31:09.658Z] =================================================================================================================== 00:08:06.178 [2024-11-20T10:31:09.658Z] Total : 34047.50 133.00 0.00 0.00 0.00 0.00 0.00 00:08:06.178 00:08:06.178 true 00:08:06.178 11:31:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7a52fdf-0c8f-49c9-987d-f8119003c3a2 00:08:06.178 11:31:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:08:06.435 11:31:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:06.435 11:31:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:06.436 11:31:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1533910 00:08:07.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.004 Nvme0n1 : 3.00 33962.67 132.67 0.00 0.00 0.00 0.00 0.00 00:08:07.004 [2024-11-20T10:31:10.484Z] =================================================================================================================== 00:08:07.004 [2024-11-20T10:31:10.484Z] Total : 33962.67 132.67 0.00 0.00 0.00 0.00 0.00 00:08:07.004 00:08:08.379 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.379 Nvme0n1 : 4.00 34008.00 132.84 0.00 0.00 0.00 0.00 0.00 00:08:08.379 [2024-11-20T10:31:11.859Z] =================================================================================================================== 00:08:08.379 [2024-11-20T10:31:11.859Z] Total : 34008.00 132.84 0.00 0.00 0.00 0.00 0.00 00:08:08.379 00:08:09.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.313 Nvme0n1 : 5.00 34144.40 133.38 0.00 0.00 0.00 0.00 0.00 00:08:09.313 [2024-11-20T10:31:12.793Z] =================================================================================================================== 00:08:09.313 [2024-11-20T10:31:12.793Z] Total : 34144.40 133.38 0.00 0.00 0.00 0.00 0.00 00:08:09.313 00:08:10.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.246 Nvme0n1 : 6.00 34100.83 133.21 0.00 0.00 0.00 0.00 0.00 00:08:10.246 [2024-11-20T10:31:13.726Z] =================================================================================================================== 
00:08:10.246 [2024-11-20T10:31:13.726Z] Total : 34100.83 133.21 0.00 0.00 0.00 0.00 0.00 00:08:10.246 00:08:11.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.178 Nvme0n1 : 7.00 34175.43 133.50 0.00 0.00 0.00 0.00 0.00 00:08:11.178 [2024-11-20T10:31:14.658Z] =================================================================================================================== 00:08:11.178 [2024-11-20T10:31:14.658Z] Total : 34175.43 133.50 0.00 0.00 0.00 0.00 0.00 00:08:11.178 00:08:12.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.111 Nvme0n1 : 8.00 34240.12 133.75 0.00 0.00 0.00 0.00 0.00 00:08:12.111 [2024-11-20T10:31:15.591Z] =================================================================================================================== 00:08:12.111 [2024-11-20T10:31:15.591Z] Total : 34240.12 133.75 0.00 0.00 0.00 0.00 0.00 00:08:12.111 00:08:13.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.044 Nvme0n1 : 9.00 34303.78 134.00 0.00 0.00 0.00 0.00 0.00 00:08:13.044 [2024-11-20T10:31:16.524Z] =================================================================================================================== 00:08:13.044 [2024-11-20T10:31:16.524Z] Total : 34303.78 134.00 0.00 0.00 0.00 0.00 0.00 00:08:13.044 00:08:13.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.978 Nvme0n1 : 10.00 34348.50 134.17 0.00 0.00 0.00 0.00 0.00 00:08:13.978 [2024-11-20T10:31:17.458Z] =================================================================================================================== 00:08:13.978 [2024-11-20T10:31:17.458Z] Total : 34348.50 134.17 0.00 0.00 0.00 0.00 0.00 00:08:13.978 00:08:14.236 00:08:14.236 Latency(us) 00:08:14.236 [2024-11-20T10:31:17.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:08:14.236 Nvme0n1 : 10.00 34348.83 134.18 0.00 0.00 3723.66 2293.76 14075.99 00:08:14.236 [2024-11-20T10:31:17.716Z] =================================================================================================================== 00:08:14.236 [2024-11-20T10:31:17.716Z] Total : 34348.83 134.18 0.00 0.00 3723.66 2293.76 14075.99 00:08:14.236 { 00:08:14.236 "results": [ 00:08:14.236 { 00:08:14.236 "job": "Nvme0n1", 00:08:14.236 "core_mask": "0x2", 00:08:14.236 "workload": "randwrite", 00:08:14.236 "status": "finished", 00:08:14.236 "queue_depth": 128, 00:08:14.236 "io_size": 4096, 00:08:14.236 "runtime": 10.00363, 00:08:14.236 "iops": 34348.831374211164, 00:08:14.236 "mibps": 134.17512255551236, 00:08:14.236 "io_failed": 0, 00:08:14.236 "io_timeout": 0, 00:08:14.236 "avg_latency_us": 3723.662539487358, 00:08:14.236 "min_latency_us": 2293.76, 00:08:14.236 "max_latency_us": 14075.99304347826 00:08:14.236 } 00:08:14.236 ], 00:08:14.236 "core_count": 1 00:08:14.236 } 00:08:14.236 11:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1533739 00:08:14.236 11:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1533739 ']' 00:08:14.237 11:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1533739 00:08:14.237 11:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:14.237 11:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.237 11:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1533739 00:08:14.237 11:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:14.237 11:31:17 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:14.237 11:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1533739' 00:08:14.237 killing process with pid 1533739 00:08:14.237 11:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1533739 00:08:14.237 Received shutdown signal, test time was about 10.000000 seconds 00:08:14.237 00:08:14.237 Latency(us) 00:08:14.237 [2024-11-20T10:31:17.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.237 [2024-11-20T10:31:17.717Z] =================================================================================================================== 00:08:14.237 [2024-11-20T10:31:17.717Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:14.237 11:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1533739 00:08:14.495 11:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 10.0.0.2 -s 4420 00:08:14.495 11:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:14.753 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7a52fdf-0c8f-49c9-987d-f8119003c3a2 00:08:14.753 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:15.012 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:08:15.012 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:15.012 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1531249 00:08:15.012 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1531249 00:08:15.012 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1531249 Killed "${NVMF_APP[@]}" "$@" 00:08:15.012 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:15.012 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:15.012 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:15.012 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.012 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:15.012 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=1535383 00:08:15.012 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 1535383 00:08:15.012 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:15.012 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1535383 ']' 00:08:15.012 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.012 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.012 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.012 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.012 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:15.012 [2024-11-20 11:31:18.420875] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:08:15.012 [2024-11-20 11:31:18.420932] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.270 [2024-11-20 11:31:18.504238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.270 [2024-11-20 11:31:18.550817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.270 [2024-11-20 11:31:18.550856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.270 [2024-11-20 11:31:18.550866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.270 [2024-11-20 11:31:18.550890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.270 [2024-11-20 11:31:18.550898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:15.270 [2024-11-20 11:31:18.551385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.270 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.270 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:15.270 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:15.270 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:15.270 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:15.270 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.270 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:15.528 [2024-11-20 11:31:18.863765] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:15.528 [2024-11-20 11:31:18.863848] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:15.528 [2024-11-20 11:31:18.863876] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:15.528 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:15.528 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f7f2d961-4f42-4a28-968a-56ff9ecd123a 00:08:15.528 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=f7f2d961-4f42-4a28-968a-56ff9ecd123a 00:08:15.528 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:15.528 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:15.528 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:15.528 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:15.528 11:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:15.785 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f7f2d961-4f42-4a28-968a-56ff9ecd123a -t 2000 00:08:15.785 [ 00:08:15.785 { 00:08:15.785 "name": "f7f2d961-4f42-4a28-968a-56ff9ecd123a", 00:08:15.785 "aliases": [ 00:08:15.785 "lvs/lvol" 00:08:15.785 ], 00:08:15.785 "product_name": "Logical Volume", 00:08:15.785 "block_size": 4096, 00:08:15.785 "num_blocks": 38912, 00:08:15.785 "uuid": "f7f2d961-4f42-4a28-968a-56ff9ecd123a", 00:08:15.785 "assigned_rate_limits": { 00:08:15.785 "rw_ios_per_sec": 0, 00:08:15.785 "rw_mbytes_per_sec": 0, 00:08:15.785 "r_mbytes_per_sec": 0, 00:08:15.785 "w_mbytes_per_sec": 0 00:08:15.785 }, 00:08:15.785 "claimed": false, 00:08:15.785 "zoned": false, 00:08:15.785 "supported_io_types": { 00:08:15.785 "read": true, 00:08:15.785 "write": true, 00:08:15.785 "unmap": true, 00:08:15.785 "flush": false, 00:08:15.785 "reset": true, 00:08:15.785 "nvme_admin": false, 00:08:15.785 "nvme_io": false, 00:08:15.785 "nvme_io_md": false, 00:08:15.785 "write_zeroes": true, 00:08:15.785 "zcopy": false, 00:08:15.785 "get_zone_info": false, 00:08:15.785 
"zone_management": false, 00:08:15.785 "zone_append": false, 00:08:15.785 "compare": false, 00:08:15.785 "compare_and_write": false, 00:08:15.785 "abort": false, 00:08:15.785 "seek_hole": true, 00:08:15.785 "seek_data": true, 00:08:15.785 "copy": false, 00:08:15.785 "nvme_iov_md": false 00:08:15.785 }, 00:08:15.785 "driver_specific": { 00:08:15.785 "lvol": { 00:08:15.785 "lvol_store_uuid": "e7a52fdf-0c8f-49c9-987d-f8119003c3a2", 00:08:15.785 "base_bdev": "aio_bdev", 00:08:15.785 "thin_provision": false, 00:08:15.785 "num_allocated_clusters": 38, 00:08:15.785 "snapshot": false, 00:08:15.785 "clone": false, 00:08:15.785 "esnap_clone": false 00:08:15.785 } 00:08:15.785 } 00:08:15.785 } 00:08:15.785 ] 00:08:15.785 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:15.785 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7a52fdf-0c8f-49c9-987d-f8119003c3a2 00:08:15.785 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:16.041 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:16.041 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7a52fdf-0c8f-49c9-987d-f8119003c3a2 00:08:16.041 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:16.299 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:16.299 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:16.557 [2024-11-20 11:31:19.816395] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:16.557 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7a52fdf-0c8f-49c9-987d-f8119003c3a2 00:08:16.557 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:16.557 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7a52fdf-0c8f-49c9-987d-f8119003c3a2 00:08:16.557 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:16.557 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.557 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:16.557 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.557 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:16.557 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.557 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:16.557 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:16.557 11:31:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7a52fdf-0c8f-49c9-987d-f8119003c3a2 00:08:16.814 request: 00:08:16.814 { 00:08:16.814 "uuid": "e7a52fdf-0c8f-49c9-987d-f8119003c3a2", 00:08:16.814 "method": "bdev_lvol_get_lvstores", 00:08:16.814 "req_id": 1 00:08:16.814 } 00:08:16.814 Got JSON-RPC error response 00:08:16.814 response: 00:08:16.814 { 00:08:16.814 "code": -19, 00:08:16.814 "message": "No such device" 00:08:16.814 } 00:08:16.814 11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:16.814 11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:16.814 11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:16.814 11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:16.814 11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:16.814 aio_bdev 00:08:16.814 11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f7f2d961-4f42-4a28-968a-56ff9ecd123a 00:08:16.814 11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f7f2d961-4f42-4a28-968a-56ff9ecd123a 00:08:16.814 
11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.814 11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:16.815 11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.815 11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.815 11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:17.072 11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f7f2d961-4f42-4a28-968a-56ff9ecd123a -t 2000 00:08:17.330 [ 00:08:17.330 { 00:08:17.330 "name": "f7f2d961-4f42-4a28-968a-56ff9ecd123a", 00:08:17.330 "aliases": [ 00:08:17.330 "lvs/lvol" 00:08:17.330 ], 00:08:17.330 "product_name": "Logical Volume", 00:08:17.330 "block_size": 4096, 00:08:17.330 "num_blocks": 38912, 00:08:17.330 "uuid": "f7f2d961-4f42-4a28-968a-56ff9ecd123a", 00:08:17.330 "assigned_rate_limits": { 00:08:17.330 "rw_ios_per_sec": 0, 00:08:17.330 "rw_mbytes_per_sec": 0, 00:08:17.330 "r_mbytes_per_sec": 0, 00:08:17.330 "w_mbytes_per_sec": 0 00:08:17.330 }, 00:08:17.330 "claimed": false, 00:08:17.330 "zoned": false, 00:08:17.330 "supported_io_types": { 00:08:17.330 "read": true, 00:08:17.330 "write": true, 00:08:17.330 "unmap": true, 00:08:17.330 "flush": false, 00:08:17.330 "reset": true, 00:08:17.330 "nvme_admin": false, 00:08:17.330 "nvme_io": false, 00:08:17.330 "nvme_io_md": false, 00:08:17.330 "write_zeroes": true, 00:08:17.330 "zcopy": false, 00:08:17.330 "get_zone_info": false, 00:08:17.330 "zone_management": false, 00:08:17.330 "zone_append": false, 
00:08:17.330 "compare": false, 00:08:17.330 "compare_and_write": false, 00:08:17.330 "abort": false, 00:08:17.330 "seek_hole": true, 00:08:17.330 "seek_data": true, 00:08:17.330 "copy": false, 00:08:17.330 "nvme_iov_md": false 00:08:17.330 }, 00:08:17.330 "driver_specific": { 00:08:17.330 "lvol": { 00:08:17.330 "lvol_store_uuid": "e7a52fdf-0c8f-49c9-987d-f8119003c3a2", 00:08:17.330 "base_bdev": "aio_bdev", 00:08:17.330 "thin_provision": false, 00:08:17.330 "num_allocated_clusters": 38, 00:08:17.330 "snapshot": false, 00:08:17.330 "clone": false, 00:08:17.330 "esnap_clone": false 00:08:17.330 } 00:08:17.330 } 00:08:17.330 } 00:08:17.330 ] 00:08:17.330 11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:17.330 11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7a52fdf-0c8f-49c9-987d-f8119003c3a2 00:08:17.330 11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:17.588 11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:17.588 11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7a52fdf-0c8f-49c9-987d-f8119003c3a2 00:08:17.588 11:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:17.588 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:17.588 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 
f7f2d961-4f42-4a28-968a-56ff9ecd123a 00:08:17.846 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e7a52fdf-0c8f-49c9-987d-f8119003c3a2 00:08:18.105 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:18.362 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:18.362 00:08:18.362 real 0m16.956s 00:08:18.362 user 0m44.341s 00:08:18.362 sys 0m3.399s 00:08:18.362 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.362 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:18.362 ************************************ 00:08:18.362 END TEST lvs_grow_dirty 00:08:18.362 ************************************ 00:08:18.362 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:18.362 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:18.362 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:18.362 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:18.362 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:18.362 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:18.362 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 
00:08:18.362 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:18.362 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:18.362 nvmf_trace.0 00:08:18.362 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:18.362 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:18.362 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:08:18.362 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:08:18.363 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:08:18.363 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:08:18.363 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:08:18.363 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:08:18.363 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:08:18.363 rmmod nvme_rdma 00:08:18.363 rmmod nvme_fabrics 00:08:18.363 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:08:18.363 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:08:18.363 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:08:18.363 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 1535383 ']' 00:08:18.363 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 1535383 00:08:18.363 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@954 -- # '[' -z 1535383 ']' 00:08:18.363 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1535383 00:08:18.363 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:18.363 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.363 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1535383 00:08:18.620 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.620 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.620 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1535383' 00:08:18.620 killing process with pid 1535383 00:08:18.620 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1535383 00:08:18.620 11:31:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1535383 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@264 -- # local dev 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@267 -- # remove_target_ns 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 
00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@268 -- # delete_main_bridge 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@130 -- # return 0 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # 
eval ' ip addr flush dev mlx_0_0' 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@284 -- # iptr 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@542 -- # iptables-save 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@542 -- # iptables-restore 00:08:18.621 00:08:18.621 real 0m40.466s 00:08:18.621 user 1m5.803s 00:08:18.621 sys 0m9.971s 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:18.621 ************************************ 00:08:18.621 END TEST nvmf_lvs_grow 00:08:18.621 ************************************ 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@24 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.621 11:31:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:18.880 ************************************ 00:08:18.880 START TEST nvmf_bdev_io_wait 00:08:18.880 ************************************ 00:08:18.880 11:31:22 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:08:18.880 * Looking for test storage... 00:08:18.880 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 
-- # local lt=0 gt=0 eq=0 v 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.880 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:18.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.880 --rc genhtml_branch_coverage=1 00:08:18.881 --rc genhtml_function_coverage=1 00:08:18.881 --rc genhtml_legend=1 00:08:18.881 --rc geninfo_all_blocks=1 00:08:18.881 --rc geninfo_unexecuted_blocks=1 00:08:18.881 00:08:18.881 ' 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:18.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.881 --rc genhtml_branch_coverage=1 00:08:18.881 --rc genhtml_function_coverage=1 00:08:18.881 --rc genhtml_legend=1 00:08:18.881 --rc geninfo_all_blocks=1 00:08:18.881 --rc geninfo_unexecuted_blocks=1 00:08:18.881 00:08:18.881 ' 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:18.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.881 --rc genhtml_branch_coverage=1 00:08:18.881 --rc genhtml_function_coverage=1 00:08:18.881 --rc genhtml_legend=1 00:08:18.881 --rc geninfo_all_blocks=1 00:08:18.881 --rc geninfo_unexecuted_blocks=1 00:08:18.881 00:08:18.881 ' 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:18.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.881 --rc genhtml_branch_coverage=1 00:08:18.881 --rc genhtml_function_coverage=1 00:08:18.881 --rc genhtml_legend=1 00:08:18.881 --rc geninfo_all_blocks=1 00:08:18.881 --rc geninfo_unexecuted_blocks=1 00:08:18.881 00:08:18.881 ' 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.881 11:31:22 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 
00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # : 0 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:18.881 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:08:18.881 11:31:22 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # xtrace_disable 00:08:18.881 11:31:22 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # pci_devs=() 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # local -a pci_devs 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # pci_net_devs=() 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # pci_drivers=() 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # local -A pci_drivers 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # net_devs=() 00:08:25.487 11:31:28 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # local -ga net_devs 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # e810=() 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # local -ga e810 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # x722=() 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # local -ga x722 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # mlx=() 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # local -ga mlx 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.487 11:31:28 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:25.487 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 
00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:25.487 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:25.487 
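For readers following the trace: the ID arrays above (e810, x722, mlx) drive which NIC family the harness selects. A minimal sketch of that classification, with the device IDs copied from the log; `classify` is a hypothetical helper for illustration, not a function in nvmf/common.sh.

```shell
# Per-family PCI device-id lists, as populated in the trace above.
e810_ids="0x1592 0x159b"
x722_ids="0x37d2"
mlx_ids="0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013"

# classify <device-id>: print which family a device id belongs to.
classify() {
    case " $e810_ids " in *" $1 "*) echo e810; return ;; esac
    case " $x722_ids " in *" $1 "*) echo x722; return ;; esac
    case " $mlx_ids "  in *" $1 "*) echo mlx;  return ;; esac
    echo unknown
}

classify 0x1015   # the ConnectX id found at 0000:18:00.0/1 in the log
```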
11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:25.487 Found net devices under 0000:18:00.0: mlx_0_0 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:25.487 Found net devices under 0000:18:00.1: mlx_0_1 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # get_rdma_if_list 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # rdma_devs=() 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:08:25.487 
11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@89 -- # continue 2 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@89 -- # continue 2 00:08:25.487 11:31:28 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # is_hw=yes 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@61 -- # uname 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_cm 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_core 00:08:25.487 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_umad 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe iw_cm 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:08:25.488 
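The `load_ib_rdma_modules` step above modprobes the InfiniBand/RDMA stack in a fixed order. A dry-run sketch of that order (module names taken from the trace); the real helper runs `modprobe`, which needs root and the matching kernel modules, so this version only prints the sequence.

```shell
# Print the IB/RDMA module load order used by load_ib_rdma_modules
# (dry run: the real code modprobes each name, which requires root).
load_order() {
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        echo "modprobe $m"
    done
}
load_order
```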
11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # key_initiator=target1 00:08:25.488 11:31:28 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:08:25.488 
11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:08:25.488 10.0.0.1 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:08:25.488 10.0.0.2 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 1 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:25.488 
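The address assignments above come from `val_to_ip`, which the trace shows expanding 167772161 into `printf '%u.%u.%u.%u\n' 10 0 0 1`. A sketch consistent with that trace: unpack a 32-bit integer from the 0x0a000001 pool into dotted-quad form (the helper's full source is not shown in the log, so this is a reconstruction).

```shell
# val_to_ip <32-bit int>: print the dotted-quad form of the address pool
# value, as seen at nvmf/setup.sh@11-13 in the trace above.
val_to_ip() {
    printf '%u.%u.%u.%u\n' \
        "$((($1 >> 24) & 0xff))" "$((($1 >> 16) & 0xff))" \
        "$((($1 >> 8) & 0xff))"  "$(($1 & 0xff))"
}

val_to_ip 167772161   # 10.0.0.1, assigned to mlx_0_0 in the log
val_to_ip 167772162   # 10.0.0.2, assigned to mlx_0_1
```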
11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:25.488 11:31:28 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:08:25.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.027 ms 00:08:25.488 00:08:25.488 --- 10.0.0.2 ping statistics --- 00:08:25.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.488 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:25.488 
11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:08:25.488 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:08:25.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:25.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.023 ms 00:08:25.489 00:08:25.489 --- 10.0.0.2 ping statistics --- 00:08:25.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.489 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair++ )) 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # return 0 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:08:25.489 
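Every `get_ip_address` call in the trace resolves an interface's test IP by reading it back from `/sys/class/net/<dev>/ifalias`, where `set_ip` stored it earlier. A sketch of that lookup; `get_if_ip` and the `SYSFS_ROOT` override are hypothetical additions so the logic can be exercised without real mlx devices (the harness's own helper reads the sysfs path directly).

```shell
# SYSFS_ROOT lets tests point the lookup at a fake sysfs tree; on a real
# host it defaults to /sys/class/net, matching the paths in the trace.
SYSFS_ROOT=${SYSFS_ROOT:-/sys/class/net}

# get_if_ip <dev>: read back the test IP stored in the ifalias attribute.
get_if_ip() {
    cat "$SYSFS_ROOT/$1/ifalias"
}
```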
11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target1 00:08:25.489 11:31:28 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@107 -- # local dev=target0 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target1 
00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF 
--wait-for-rpc 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=1538737 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 1538737 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1538737 ']' 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.489 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:25.490 11:31:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:25.490 [2024-11-20 11:31:28.615832] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:08:25.490 [2024-11-20 11:31:28.615890] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.490 [2024-11-20 11:31:28.696142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.490 [2024-11-20 11:31:28.747666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.490 [2024-11-20 11:31:28.747710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.490 [2024-11-20 11:31:28.747719] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.490 [2024-11-20 11:31:28.747728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.490 [2024-11-20 11:31:28.747735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:25.490 [2024-11-20 11:31:28.749194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.490 [2024-11-20 11:31:28.749283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.490 [2024-11-20 11:31:28.749361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.490 [2024-11-20 11:31:28.749363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.057 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.057 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:26.057 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:26.057 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:26.057 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.057 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.057 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:26.057 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.057 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.057 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.057 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:26.057 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.057 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:08:26.316 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.316 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:26.316 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.316 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.316 [2024-11-20 11:31:29.613303] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10a62b0/0x10aa7a0) succeed. 00:08:26.316 [2024-11-20 11:31:29.622090] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10a7940/0x10ebe40) succeed. 00:08:26.316 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.316 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:26.316 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.316 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.316 Malloc0 00:08:26.316 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.316 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:26.316 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.316 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.576 [2024-11-20 11:31:29.822482] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1538940 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1538942 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local 
subsystem config 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:08:26.576 { 00:08:26.576 "params": { 00:08:26.576 "name": "Nvme$subsystem", 00:08:26.576 "trtype": "$TEST_TRANSPORT", 00:08:26.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:26.576 "adrfam": "ipv4", 00:08:26.576 "trsvcid": "$NVMF_PORT", 00:08:26.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:26.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:26.576 "hdgst": ${hdgst:-false}, 00:08:26.576 "ddgst": ${ddgst:-false} 00:08:26.576 }, 00:08:26.576 "method": "bdev_nvme_attach_controller" 00:08:26.576 } 00:08:26.576 EOF 00:08:26.576 )") 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1538944 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:08:26.576 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:08:26.577 { 00:08:26.577 "params": { 00:08:26.577 "name": "Nvme$subsystem", 00:08:26.577 "trtype": "$TEST_TRANSPORT", 00:08:26.577 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:26.577 "adrfam": "ipv4", 00:08:26.577 "trsvcid": "$NVMF_PORT", 00:08:26.577 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:26.577 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:26.577 "hdgst": ${hdgst:-false}, 00:08:26.577 "ddgst": ${ddgst:-false} 00:08:26.577 }, 00:08:26.577 "method": "bdev_nvme_attach_controller" 00:08:26.577 } 00:08:26.577 EOF 00:08:26.577 )") 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1538947 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:08:26.577 { 00:08:26.577 "params": { 00:08:26.577 "name": "Nvme$subsystem", 00:08:26.577 "trtype": "$TEST_TRANSPORT", 00:08:26.577 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:26.577 "adrfam": "ipv4", 00:08:26.577 "trsvcid": "$NVMF_PORT", 00:08:26.577 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:26.577 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:26.577 "hdgst": ${hdgst:-false}, 00:08:26.577 "ddgst": ${ddgst:-false} 00:08:26.577 }, 00:08:26.577 "method": "bdev_nvme_attach_controller" 00:08:26.577 } 00:08:26.577 EOF 
00:08:26.577 )") 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:08:26.577 { 00:08:26.577 "params": { 00:08:26.577 "name": "Nvme$subsystem", 00:08:26.577 "trtype": "$TEST_TRANSPORT", 00:08:26.577 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:26.577 "adrfam": "ipv4", 00:08:26.577 "trsvcid": "$NVMF_PORT", 00:08:26.577 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:26.577 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:26.577 "hdgst": ${hdgst:-false}, 00:08:26.577 "ddgst": ${ddgst:-false} 00:08:26.577 }, 00:08:26.577 "method": "bdev_nvme_attach_controller" 00:08:26.577 } 00:08:26.577 EOF 00:08:26.577 )") 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1538940 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:08:26.577 "params": { 00:08:26.577 "name": "Nvme1", 00:08:26.577 "trtype": "rdma", 00:08:26.577 "traddr": "10.0.0.2", 00:08:26.577 "adrfam": "ipv4", 00:08:26.577 "trsvcid": "4420", 00:08:26.577 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:26.577 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:26.577 "hdgst": false, 00:08:26.577 "ddgst": false 00:08:26.577 }, 00:08:26.577 "method": "bdev_nvme_attach_controller" 00:08:26.577 }' 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:08:26.577 "params": { 00:08:26.577 "name": "Nvme1", 00:08:26.577 "trtype": "rdma", 00:08:26.577 "traddr": "10.0.0.2", 00:08:26.577 "adrfam": "ipv4", 00:08:26.577 "trsvcid": "4420", 00:08:26.577 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:26.577 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:26.577 "hdgst": false, 00:08:26.577 "ddgst": false 00:08:26.577 }, 00:08:26.577 "method": "bdev_nvme_attach_controller" 00:08:26.577 }' 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:08:26.577 "params": { 00:08:26.577 "name": "Nvme1", 00:08:26.577 "trtype": "rdma", 00:08:26.577 "traddr": "10.0.0.2", 00:08:26.577 "adrfam": "ipv4", 00:08:26.577 "trsvcid": "4420", 00:08:26.577 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:26.577 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:26.577 
"hdgst": false, 00:08:26.577 "ddgst": false 00:08:26.577 }, 00:08:26.577 "method": "bdev_nvme_attach_controller" 00:08:26.577 }' 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:08:26.577 11:31:29 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:08:26.577 "params": { 00:08:26.577 "name": "Nvme1", 00:08:26.577 "trtype": "rdma", 00:08:26.577 "traddr": "10.0.0.2", 00:08:26.577 "adrfam": "ipv4", 00:08:26.577 "trsvcid": "4420", 00:08:26.577 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:26.577 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:26.577 "hdgst": false, 00:08:26.577 "ddgst": false 00:08:26.577 }, 00:08:26.577 "method": "bdev_nvme_attach_controller" 00:08:26.577 }' 00:08:26.577 [2024-11-20 11:31:29.877942] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:08:26.577 [2024-11-20 11:31:29.877944] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:08:26.577 [2024-11-20 11:31:29.877947] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
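The four JSON blobs printf'd above are what the test's gen_nvmf_target_json helper feeds each bdevperf instance over /dev/fd/63. A minimal sketch of that pattern is below; the "subsystems"/"bdev" envelope is an assumption about what the helper wraps around the params, while the param values themselves (rdma transport, 10.0.0.2:4420, cnode1/host1) are the ones visible in this log:

```shell
# Hedged sketch: emit a bdevperf-style JSON config like the ones printf'd
# in the log above. The "subsystems"/"bdev" wrapper is an assumption; the
# "params" object mirrors the expanded values seen in this log.
gen_nvmf_target_json() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# The test passes this to bdevperf via process substitution, e.g.:
#   bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1
gen_nvmf_target_json
```

Passing the config through `--json /dev/fd/63` (the process-substitution fd seen in the log) lets each of the four concurrent bdevperf instances get its own config without writing temp files.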
00:08:26.577 [2024-11-20 11:31:29.878012] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:26.577 [2024-11-20 11:31:29.878014] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:26.577 [2024-11-20 11:31:29.878013] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:26.577 [2024-11-20 11:31:29.884283] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:08:26.577 [2024-11-20 11:31:29.884339] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:26.836 [2024-11-20 11:31:30.086276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.836 [2024-11-20 11:31:30.133995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:26.836 [2024-11-20 11:31:30.194997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.836 [2024-11-20 11:31:30.242982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:26.836 [2024-11-20 11:31:30.306155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.094 [2024-11-20 11:31:30.352480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:27.094 [2024-11-20 11:31:30.408140] app.c: 919:spdk_app_start: *NOTICE*: Total cores
available: 1 00:08:27.094 [2024-11-20 11:31:30.462400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:27.094 Running I/O for 1 seconds... 00:08:27.094 Running I/O for 1 seconds... 00:08:27.094 Running I/O for 1 seconds... 00:08:27.353 Running I/O for 1 seconds... 00:08:28.316 16944.00 IOPS, 66.19 MiB/s 00:08:28.316 Latency(us) 00:08:28.316 [2024-11-20T10:31:31.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.316 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:28.316 Nvme1n1 : 1.01 16983.62 66.34 0.00 0.00 7512.51 4644.51 15500.69 00:08:28.316 [2024-11-20T10:31:31.796Z] =================================================================================================================== 00:08:28.316 [2024-11-20T10:31:31.796Z] Total : 16983.62 66.34 0.00 0.00 7512.51 4644.51 15500.69 00:08:28.316 15479.00 IOPS, 60.46 MiB/s [2024-11-20T10:31:31.796Z] 254952.00 IOPS, 995.91 MiB/s 00:08:28.316 Latency(us) 00:08:28.316 [2024-11-20T10:31:31.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.316 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:28.316 Nvme1n1 : 1.00 254565.68 994.40 0.00 0.00 500.81 223.50 2108.55 00:08:28.316 [2024-11-20T10:31:31.796Z] =================================================================================================================== 00:08:28.316 [2024-11-20T10:31:31.796Z] Total : 254565.68 994.40 0.00 0.00 500.81 223.50 2108.55 00:08:28.316 00:08:28.316 Latency(us) 00:08:28.316 [2024-11-20T10:31:31.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.316 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:28.316 Nvme1n1 : 1.01 15536.61 60.69 0.00 0.00 8213.24 4274.09 18350.08 00:08:28.316 [2024-11-20T10:31:31.796Z] =================================================================================================================== 
00:08:28.316 [2024-11-20T10:31:31.796Z] Total : 15536.61 60.69 0.00 0.00 8213.24 4274.09 18350.08 00:08:28.316 17468.00 IOPS, 68.23 MiB/s 00:08:28.316 Latency(us) 00:08:28.316 [2024-11-20T10:31:31.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.316 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:28.316 Nvme1n1 : 1.01 17555.61 68.58 0.00 0.00 7273.74 2934.87 18350.08 00:08:28.316 [2024-11-20T10:31:31.796Z] =================================================================================================================== 00:08:28.316 [2024-11-20T10:31:31.796Z] Total : 17555.61 68.58 0.00 0.00 7273.74 2934.87 18350.08 00:08:28.316 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1538942 00:08:28.316 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1538944 00:08:28.316 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1538947 00:08:28.316 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:28.316 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.316 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.316 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.316 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:28.316 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:28.316 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 00:08:28.316 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 
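The WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID bookkeeping above, followed by the three `wait` calls, is a generic fan-out-and-reap pattern: launch one backgrounded bdevperf per workload, record `$!`, then wait on each PID. A minimal sketch of that pattern, where run_workload is a hypothetical stand-in for the real `build/examples/bdevperf` invocation (which additionally takes a distinct `-m` core mask and `-i` shm ID per instance):

```shell
# Hedged sketch of the background-and-wait pattern used for the four
# workloads (write, read, flush, unmap). run_workload is a placeholder
# for the bdevperf invocation seen in the log.
run_workload() {
  sleep 0.1            # stand-in for "Running I/O for 1 seconds..."
  echo "done: $1"
}

pids=()
for wl in write read flush unmap; do
  run_workload "$wl" &   # launch in the background, like "bdevperf ... &"
  pids+=($!)             # record the PID, like WRITE_PID=$! etc.
done

for pid in "${pids[@]}"; do
  wait "$pid"            # reap each instance, like "wait 1538940"
done
```

Waiting on each recorded PID (rather than a bare `wait`) is what lets the script interleave the waits with other teardown steps and propagate a per-workload exit status.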
00:08:28.316 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:08:28.316 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:08:28.575 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:08:28.575 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 00:08:28.575 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:08:28.575 rmmod nvme_rdma 00:08:28.575 rmmod nvme_fabrics 00:08:28.575 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:08:28.575 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:08:28.575 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:08:28.575 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 1538737 ']' 00:08:28.575 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 1538737 00:08:28.575 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1538737 ']' 00:08:28.575 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1538737 00:08:28.575 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:28.575 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.575 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1538737 00:08:28.575 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.575 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.575 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1538737' 00:08:28.575 killing process with pid 1538737 00:08:28.575 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1538737 00:08:28.575 11:31:31 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1538737 00:08:28.834 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:08:28.834 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:08:28.834 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@264 -- # local dev 00:08:28.834 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@267 -- # remove_target_ns 00:08:28.834 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:28.834 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:28.834 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:28.834 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@268 -- # delete_main_bridge 00:08:28.834 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:28.834 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # return 0 00:08:28.834 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:28.834 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:08:28.834 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:28.834 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:08:28.834 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:08:28.834 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:28.834 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@284 -- # iptr 00:08:28.835 11:31:32 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # iptables-save 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # iptables-restore 00:08:28.835 00:08:28.835 real 0m10.072s 00:08:28.835 user 0m20.741s 00:08:28.835 sys 0m6.279s 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.835 ************************************ 00:08:28.835 END TEST nvmf_bdev_io_wait 00:08:28.835 ************************************ 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@25 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:28.835 ************************************ 00:08:28.835 START TEST nvmf_queue_depth 00:08:28.835 ************************************ 00:08:28.835 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:08:29.095 * Looking for test storage... 
00:08:29.095 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:29.095 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:29.095 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:29.095 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:29.095 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:29.095 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.095 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.095 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.095 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.095 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.095 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.095 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.095 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.095 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.095 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.095 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.095 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 
00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:29.096 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.096 --rc genhtml_branch_coverage=1 00:08:29.096 --rc genhtml_function_coverage=1 00:08:29.096 --rc genhtml_legend=1 00:08:29.096 --rc geninfo_all_blocks=1 00:08:29.096 --rc geninfo_unexecuted_blocks=1 00:08:29.096 00:08:29.096 ' 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:29.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.096 --rc genhtml_branch_coverage=1 00:08:29.096 --rc genhtml_function_coverage=1 00:08:29.096 --rc genhtml_legend=1 00:08:29.096 --rc geninfo_all_blocks=1 00:08:29.096 --rc geninfo_unexecuted_blocks=1 00:08:29.096 00:08:29.096 ' 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:29.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.096 --rc genhtml_branch_coverage=1 00:08:29.096 --rc genhtml_function_coverage=1 00:08:29.096 --rc genhtml_legend=1 00:08:29.096 --rc geninfo_all_blocks=1 00:08:29.096 --rc geninfo_unexecuted_blocks=1 00:08:29.096 00:08:29.096 ' 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:29.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.096 --rc genhtml_branch_coverage=1 00:08:29.096 --rc genhtml_function_coverage=1 00:08:29.096 --rc genhtml_legend=1 00:08:29.096 --rc geninfo_all_blocks=1 00:08:29.096 --rc geninfo_unexecuted_blocks=1 00:08:29.096 00:08:29.096 ' 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.096 11:31:32 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@50 -- # : 0 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:29.096 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # 
prepare_net_devs 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:08:29.096 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:08:29.097 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # xtrace_disable 00:08:29.097 11:31:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@131 -- # pci_devs=() 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@131 -- # local -a pci_devs 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@132 -- # pci_net_devs=() 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@133 -- # pci_drivers=() 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@133 -- # local -A pci_drivers 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@135 -- # net_devs=() 
00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@135 -- # local -ga net_devs 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@136 -- # e810=() 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@136 -- # local -ga e810 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@137 -- # x722=() 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@137 -- # local -ga x722 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@138 -- # mlx=() 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@138 -- # local -ga mlx 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.665 11:31:38 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:35.665 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:35.665 
11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:35.665 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:35.665 11:31:38 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:35.665 Found net devices under 0000:18:00.0: mlx_0_0 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:35.665 Found net devices under 0000:18:00.1: mlx_0_1 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # get_rdma_if_list 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@75 -- # rdma_devs=() 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:08:35.665 11:31:38 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@89 -- # continue 2 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@89 -- # continue 2 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # is_hw=yes 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@61 -- # uname 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_cm 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_core 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_umad 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:08:35.665 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe iw_cm 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA dev_map 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # key_initiator=target1 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 
00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 
00:08:35.666 10.0.0.1 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:08:35.666 10.0.0.2 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:08:35.666 11:31:38 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 1 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@99 -- # 
get_rdma_initiator_ip_address 0 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:08:35.666 11:31:38 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:08:35.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:08:35.666 00:08:35.666 --- 10.0.0.2 ping statistics --- 00:08:35.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.666 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:08:35.666 11:31:38 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:08:35.666 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:08:35.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:35.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.024 ms 00:08:35.667 00:08:35.667 --- 10.0.0.2 ping statistics --- 00:08:35.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.667 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair++ )) 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # return 0 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:08:35.667 11:31:38 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@107 -- # local dev=target1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:08:35.667 11:31:38 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- 
# [[ -n target1 ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter 
start_nvmf_tgt 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=1542216 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 1542216 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1542216 ']' 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.667 [2024-11-20 11:31:38.674648] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:08:35.667 [2024-11-20 11:31:38.674716] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.667 [2024-11-20 11:31:38.757245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.667 [2024-11-20 11:31:38.801822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.667 [2024-11-20 11:31:38.801870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.667 [2024-11-20 11:31:38.801880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.667 [2024-11-20 11:31:38.801891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.667 [2024-11-20 11:31:38.801898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:35.667 [2024-11-20 11:31:38.802378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.667 11:31:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.667 [2024-11-20 11:31:38.971808] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12f12f0/0x12f57e0) succeed. 00:08:35.667 [2024-11-20 11:31:38.981474] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12f27a0/0x1336e80) succeed. 
00:08:35.667 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.667 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:35.667 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.667 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.667 Malloc0 00:08:35.667 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.667 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:35.667 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.667 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.667 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.667 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:35.667 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.667 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.667 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.667 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:08:35.667 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.667 11:31:39 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.667 [2024-11-20 11:31:39.072169] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:08:35.668 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.668 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1542270 00:08:35.668 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:35.668 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:35.668 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1542270 /var/tmp/bdevperf.sock 00:08:35.668 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1542270 ']' 00:08:35.668 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:35.668 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.668 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:35.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:08:35.668 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.668 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.668 [2024-11-20 11:31:39.123948] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:08:35.668 [2024-11-20 11:31:39.123998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542270 ] 00:08:35.925 [2024-11-20 11:31:39.205277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.925 [2024-11-20 11:31:39.254171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.925 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.925 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:35.925 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:35.925 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.925 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.182 NVMe0n1 00:08:36.182 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.182 11:31:39 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:36.182 Running I/O for 10 seconds... 
00:08:38.494 16384.00 IOPS, 64.00 MiB/s [2024-11-20T10:31:42.910Z] 16896.00 IOPS, 66.00 MiB/s [2024-11-20T10:31:43.846Z] 17202.67 IOPS, 67.20 MiB/s [2024-11-20T10:31:44.781Z] 17355.50 IOPS, 67.79 MiB/s [2024-11-20T10:31:45.716Z] 17372.80 IOPS, 67.86 MiB/s [2024-11-20T10:31:46.653Z] 17408.00 IOPS, 68.00 MiB/s [2024-11-20T10:31:47.588Z] 17408.00 IOPS, 68.00 MiB/s [2024-11-20T10:31:48.963Z] 17408.00 IOPS, 68.00 MiB/s [2024-11-20T10:31:49.909Z] 17424.67 IOPS, 68.07 MiB/s [2024-11-20T10:31:49.909Z] 17470.20 IOPS, 68.24 MiB/s 00:08:46.429 Latency(us) 00:08:46.429 [2024-11-20T10:31:49.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.429 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:46.429 Verification LBA range: start 0x0 length 0x4000 00:08:46.429 NVMe0n1 : 10.03 17484.82 68.30 0.00 0.00 58390.45 6867.03 39663.53 00:08:46.429 [2024-11-20T10:31:49.909Z] =================================================================================================================== 00:08:46.429 [2024-11-20T10:31:49.909Z] Total : 17484.82 68.30 0.00 0.00 58390.45 6867.03 39663.53 00:08:46.429 { 00:08:46.429 "results": [ 00:08:46.429 { 00:08:46.429 "job": "NVMe0n1", 00:08:46.429 "core_mask": "0x1", 00:08:46.429 "workload": "verify", 00:08:46.429 "status": "finished", 00:08:46.429 "verify_range": { 00:08:46.429 "start": 0, 00:08:46.429 "length": 16384 00:08:46.429 }, 00:08:46.429 "queue_depth": 1024, 00:08:46.429 "io_size": 4096, 00:08:46.429 "runtime": 10.031903, 00:08:46.429 "iops": 17484.818184545842, 00:08:46.429 "mibps": 68.3000710333822, 00:08:46.429 "io_failed": 0, 00:08:46.429 "io_timeout": 0, 00:08:46.429 "avg_latency_us": 58390.448565172286, 00:08:46.429 "min_latency_us": 6867.033043478261, 00:08:46.429 "max_latency_us": 39663.52695652174 00:08:46.429 } 00:08:46.429 ], 00:08:46.429 "core_count": 1 00:08:46.429 } 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 1542270 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1542270 ']' 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1542270 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1542270 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1542270' 00:08:46.429 killing process with pid 1542270 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1542270 00:08:46.429 Received shutdown signal, test time was about 10.000000 seconds 00:08:46.429 00:08:46.429 Latency(us) 00:08:46.429 [2024-11-20T10:31:49.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.429 [2024-11-20T10:31:49.909Z] =================================================================================================================== 00:08:46.429 [2024-11-20T10:31:49.909Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1542270 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
target/queue_depth.sh@43 -- # nvmftestfini 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:08:46.429 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:08:46.429 rmmod nvme_rdma 00:08:46.688 rmmod nvme_fabrics 00:08:46.688 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:08:46.688 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:08:46.688 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:08:46.688 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 1542216 ']' 00:08:46.688 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 1542216 00:08:46.688 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1542216 ']' 00:08:46.688 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1542216 00:08:46.688 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:46.688 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.688 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1542216 00:08:46.688 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:46.688 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:46.688 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1542216' 00:08:46.688 killing process with pid 1542216 00:08:46.688 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1542216 00:08:46.688 11:31:49 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1542216 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@264 -- # local dev 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@267 -- # remove_target_ns 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@268 -- # delete_main_bridge 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@130 -- # return 0 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 
00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:08:46.947 11:31:50 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@284 -- # iptr 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@542 -- # iptables-save 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@542 -- # iptables-restore 00:08:46.947 00:08:46.947 real 0m17.994s 00:08:46.947 user 0m24.179s 00:08:46.947 sys 0m5.389s 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:46.947 ************************************ 00:08:46.947 END TEST nvmf_queue_depth 00:08:46.947 ************************************ 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:46.947 ************************************ 00:08:46.947 START TEST nvmf_nmic 00:08:46.947 ************************************ 00:08:46.947 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:08:47.207 * Looking for test storage... 
00:08:47.207 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.207 
11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:47.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.207 --rc genhtml_branch_coverage=1 00:08:47.207 --rc genhtml_function_coverage=1 00:08:47.207 --rc genhtml_legend=1 00:08:47.207 --rc geninfo_all_blocks=1 00:08:47.207 --rc 
geninfo_unexecuted_blocks=1 00:08:47.207 00:08:47.207 ' 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:47.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.207 --rc genhtml_branch_coverage=1 00:08:47.207 --rc genhtml_function_coverage=1 00:08:47.207 --rc genhtml_legend=1 00:08:47.207 --rc geninfo_all_blocks=1 00:08:47.207 --rc geninfo_unexecuted_blocks=1 00:08:47.207 00:08:47.207 ' 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:47.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.207 --rc genhtml_branch_coverage=1 00:08:47.207 --rc genhtml_function_coverage=1 00:08:47.207 --rc genhtml_legend=1 00:08:47.207 --rc geninfo_all_blocks=1 00:08:47.207 --rc geninfo_unexecuted_blocks=1 00:08:47.207 00:08:47.207 ' 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:47.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.207 --rc genhtml_branch_coverage=1 00:08:47.207 --rc genhtml_function_coverage=1 00:08:47.207 --rc genhtml_legend=1 00:08:47.207 --rc geninfo_all_blocks=1 00:08:47.207 --rc geninfo_unexecuted_blocks=1 00:08:47.207 00:08:47.207 ' 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.207 
11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.207 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
paths/export.sh@5 -- # export PATH 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:47.208 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # prepare_net_devs 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:08:47.208 11:31:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # xtrace_disable 00:08:47.208 11:31:50 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@131 -- # pci_devs=() 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@131 -- # local -a pci_devs 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@132 -- # pci_net_devs=() 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@133 -- # pci_drivers=() 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@133 -- # local -A pci_drivers 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@135 -- # net_devs=() 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@135 -- # local -ga net_devs 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@136 -- # e810=() 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@136 -- # local -ga e810 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@137 -- # x722=() 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@137 -- # local -ga x722 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@138 -- # mlx=() 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@138 -- # local -ga mlx 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:08:53.767 11:31:56 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:53.767 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:53.767 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@208 -- # (( 0 > 0 )) 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:53.767 Found net devices under 0000:18:00.0: mlx_0_0 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:53.767 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:53.768 Found net devices under 0000:18:00.1: mlx_0_1 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@248 -- # (( 2 == 0 )) 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # get_rdma_if_list 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@75 -- # rdma_devs=() 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@89 -- # continue 2 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@89 -- # continue 2 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # is_hw=yes 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@61 -- # uname 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_cm 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_core 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_umad 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe iw_cm 00:08:53.768 
11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@58 -- # key_initiator=target1 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # tee 
/sys/class/net/mlx_0_0/ifalias 00:08:53.768 10.0.0.1 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:08:53.768 10.0.0.2 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:08:53.768 11:31:56 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 1 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:08:53.768 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # 
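The `set_ip` trace above turns the integer 167772161 into the dotted-quad 10.0.0.1 before assigning it to the device. A minimal standalone sketch of that conversion (a re-implementation mirroring the `val_to_ip` helper seen in the trace, not the script's exact source):

```shell
val_to_ip() {
    # Split a 32-bit value into four octets, mirroring the
    # printf '%u.%u.%u.%u' call visible in the setup.sh trace.
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xFF )) \
        $(( (val >> 16) & 0xFF )) \
        $(( (val >> 8)  & 0xFF )) \
        $((  val        & 0xFF ))
}
val_to_ip 167772161   # 10.0.0.1 (0x0A000001)
val_to_ip 167772162   # 10.0.0.2
```

This is why the `ip_pool += 2` step later in the trace advances the pool by two addresses per initiator/target pair.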
get_target_ip_address 0 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:53.769 
11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:08:53.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:08:53.769 00:08:53.769 --- 10.0.0.2 ping statistics --- 00:08:53.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.769 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:08:53.769 11:31:56 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:08:53.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:08:53.769 00:08:53.769 --- 10.0.0.2 ping statistics --- 00:08:53.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.769 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair++ )) 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # return 0 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:08:53.769 11:31:56 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@192 -- # 
get_rdma_target_ip_address 1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:08:53.769 11:31:56 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local 
dev=target1 in_ns= ip 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target1 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:08:53.769 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:08:53.770 11:31:56 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=1546643 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 1546643 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1546643 ']' 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.770 11:31:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:53.770 [2024-11-20 11:31:57.025929] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:08:53.770 [2024-11-20 11:31:57.025989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.770 [2024-11-20 11:31:57.105276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.770 [2024-11-20 11:31:57.153167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.770 [2024-11-20 11:31:57.153208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.770 [2024-11-20 11:31:57.153218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.770 [2024-11-20 11:31:57.153242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.770 [2024-11-20 11:31:57.153250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:53.770 [2024-11-20 11:31:57.154632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.770 [2024-11-20 11:31:57.154721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.770 [2024-11-20 11:31:57.154801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.770 [2024-11-20 11:31:57.154803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.028 [2024-11-20 11:31:57.331735] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d6c220/0x1d70710) succeed. 00:08:54.028 [2024-11-20 11:31:57.340778] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d6d8b0/0x1db1db0) succeed. 
00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.028 Malloc0 00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.028 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.286 [2024-11-20 11:31:57.521057] 
rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:54.286 test case1: single bdev can't be used in multiple subsystems 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 10.0.0.2 -s 4420 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.286 [2024-11-20 11:31:57.544935] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type 
exclusive_write by module NVMe-oF Target 00:08:54.286 [2024-11-20 11:31:57.544955] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:54.286 [2024-11-20 11:31:57.544965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.286 request: 00:08:54.286 { 00:08:54.286 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:54.286 "namespace": { 00:08:54.286 "bdev_name": "Malloc0", 00:08:54.286 "no_auto_visible": false 00:08:54.286 }, 00:08:54.286 "method": "nvmf_subsystem_add_ns", 00:08:54.286 "req_id": 1 00:08:54.286 } 00:08:54.286 Got JSON-RPC error response 00:08:54.286 response: 00:08:54.286 { 00:08:54.286 "code": -32602, 00:08:54.286 "message": "Invalid parameters" 00:08:54.286 } 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:54.286 Adding namespace failed - expected result. 
00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:54.286 test case2: host connect to nvmf target in multiple paths 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4421 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:54.286 [2024-11-20 11:31:57.556992] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4421 *** 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.286 11:31:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:55.218 11:31:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:56.151 11:31:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:56.151 11:31:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:56.151 11:31:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:56.151 11:31:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:56.151 11:31:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1209 -- # sleep 2 00:08:58.681 11:32:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:58.681 11:32:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:58.681 11:32:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:58.681 11:32:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:58.681 11:32:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:58.681 11:32:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:58.681 11:32:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:58.681 [global] 00:08:58.681 thread=1 00:08:58.681 invalidate=1 00:08:58.681 rw=write 00:08:58.681 time_based=1 00:08:58.681 runtime=1 00:08:58.681 ioengine=libaio 00:08:58.681 direct=1 00:08:58.681 bs=4096 00:08:58.681 iodepth=1 00:08:58.681 norandommap=0 00:08:58.681 numjobs=1 00:08:58.681 00:08:58.681 verify_dump=1 00:08:58.681 verify_backlog=512 00:08:58.681 verify_state_save=0 00:08:58.681 do_verify=1 00:08:58.681 verify=crc32c-intel 00:08:58.681 [job0] 00:08:58.681 filename=/dev/nvme0n1 00:08:58.681 Could not set queue depth (nvme0n1) 00:08:58.681 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:58.681 fio-3.35 00:08:58.681 Starting 1 thread 00:08:59.617 00:08:59.617 job0: (groupid=0, jobs=1): err= 0: pid=1547387: Wed Nov 20 11:32:03 2024 00:08:59.617 read: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec) 00:08:59.617 slat (nsec): min=8431, max=37866, avg=9047.46, stdev=1163.48 00:08:59.617 clat (usec): min=46, max=451, avg=61.84, stdev= 6.92 
00:08:59.617 lat (usec): min=59, max=460, avg=70.88, stdev= 6.99 00:08:59.617 clat percentiles (usec): 00:08:59.617 | 1.00th=[ 54], 5.00th=[ 56], 10.00th=[ 57], 20.00th=[ 59], 00:08:59.617 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 63], 00:08:59.617 | 70.00th=[ 64], 80.00th=[ 66], 90.00th=[ 68], 95.00th=[ 70], 00:08:59.617 | 99.00th=[ 74], 99.50th=[ 77], 99.90th=[ 84], 99.95th=[ 87], 00:08:59.617 | 99.99th=[ 453] 00:08:59.617 write: IOPS=6847, BW=26.7MiB/s (28.0MB/s)(26.8MiB/1001msec); 0 zone resets 00:08:59.617 slat (nsec): min=10945, max=46241, avg=11856.12, stdev=1421.83 00:08:59.617 clat (usec): min=35, max=231, avg=59.83, stdev= 5.03 00:08:59.617 lat (usec): min=59, max=243, avg=71.69, stdev= 5.21 00:08:59.617 clat percentiles (usec): 00:08:59.617 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 57], 00:08:59.617 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 61], 00:08:59.617 | 70.00th=[ 62], 80.00th=[ 64], 90.00th=[ 66], 95.00th=[ 68], 00:08:59.617 | 99.00th=[ 73], 99.50th=[ 76], 99.90th=[ 81], 99.95th=[ 83], 00:08:59.617 | 99.99th=[ 231] 00:08:59.617 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:08:59.617 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:08:59.617 lat (usec) : 50=0.19%, 100=99.78%, 250=0.02%, 500=0.01% 00:08:59.617 cpu : usr=9.80%, sys=13.30%, ctx=13510, majf=0, minf=1 00:08:59.617 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.617 issued rwts: total=6656,6854,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.617 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.617 00:08:59.617 Run status group 0 (all jobs): 00:08:59.617 READ: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=26.0MiB (27.3MB), run=1001-1001msec 
00:08:59.617 WRITE: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=26.8MiB (28.1MB), run=1001-1001msec 00:08:59.617 00:08:59.617 Disk stats (read/write): 00:08:59.617 nvme0n1: ios=6052/6144, merge=0/0, ticks=338/331, in_queue=669, util=90.78% 00:08:59.617 11:32:03 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:01.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
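The fio summary above reports 6649 read IOPS for 6656 issued reads over a 1001 ms runtime; those figures are internally consistent, as a quick check (using only the numbers printed in the log) shows:

```python
# Sanity-check the fio read IOPS from the job summary above:
# IOPS = total issued I/Os / runtime in seconds.
issued_reads = 6656     # "issued rwts: total=6656,..." in the log
runtime_s = 1.001       # "run=1001-1001msec" in the log
print(int(issued_reads / runtime_s))  # 6649, matching "read: IOPS=6649"
```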
nvmf/common.sh@102 -- # set +e 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:09:01.522 rmmod nvme_rdma 00:09:01.522 rmmod nvme_fabrics 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 1546643 ']' 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 1546643 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1546643 ']' 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1546643 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.522 11:32:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1546643 00:09:01.782 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.782 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.782 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1546643' 00:09:01.782 killing process with pid 1546643 00:09:01.782 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1546643 00:09:01.782 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 1546643 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@264 -- # local dev 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@267 -- # remove_target_ns 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@268 -- # delete_main_bridge 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@130 -- # return 0 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # ip addr 
flush dev mlx_0_1 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@284 -- # iptr 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@542 -- # iptables-save 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@542 -- # iptables-restore 00:09:02.042 00:09:02.042 real 0m14.995s 00:09:02.042 user 0m36.760s 00:09:02.042 sys 0m5.734s 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.042 ************************************ 00:09:02.042 END TEST nvmf_nmic 00:09:02.042 
************************************ 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:02.042 ************************************ 00:09:02.042 START TEST nvmf_fio_target 00:09:02.042 ************************************ 00:09:02.042 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:09:02.301 * Looking for test storage... 00:09:02.301 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:02.301 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:02.301 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:02.301 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:02.301 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:02.301 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.301 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.301 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.301 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.301 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@336 -- # read -ra ver1 00:09:02.301 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.301 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:02.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.302 --rc genhtml_branch_coverage=1 00:09:02.302 --rc genhtml_function_coverage=1 00:09:02.302 --rc genhtml_legend=1 00:09:02.302 --rc geninfo_all_blocks=1 00:09:02.302 --rc 
geninfo_unexecuted_blocks=1 00:09:02.302 00:09:02.302 ' 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:02.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.302 --rc genhtml_branch_coverage=1 00:09:02.302 --rc genhtml_function_coverage=1 00:09:02.302 --rc genhtml_legend=1 00:09:02.302 --rc geninfo_all_blocks=1 00:09:02.302 --rc geninfo_unexecuted_blocks=1 00:09:02.302 00:09:02.302 ' 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:02.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.302 --rc genhtml_branch_coverage=1 00:09:02.302 --rc genhtml_function_coverage=1 00:09:02.302 --rc genhtml_legend=1 00:09:02.302 --rc geninfo_all_blocks=1 00:09:02.302 --rc geninfo_unexecuted_blocks=1 00:09:02.302 00:09:02.302 ' 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:02.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.302 --rc genhtml_branch_coverage=1 00:09:02.302 --rc genhtml_function_coverage=1 00:09:02.302 --rc genhtml_legend=1 00:09:02.302 --rc geninfo_all_blocks=1 00:09:02.302 --rc geninfo_unexecuted_blocks=1 00:09:02.302 00:09:02.302 ' 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@50 -- # : 0 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:02.302 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 
00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:02.302 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:09:02.303 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:02.303 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:02.303 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:02.303 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:09:02.303 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:09:02.303 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # xtrace_disable 00:09:02.303 11:32:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@131 -- # pci_devs=() 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@135 -- # net_devs=() 00:09:08.919 11:32:11 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@136 -- # e810=() 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@136 -- # local -ga e810 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@137 -- # x722=() 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@137 -- # local -ga x722 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@138 -- # mlx=() 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@138 -- # local -ga mlx 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:08.919 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:08.919 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:09:08.919 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:08.920 Found net devices under 0000:18:00.0: mlx_0_0 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:08.920 Found net devices under 0000:18:00.1: mlx_0_1 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # get_rdma_if_list 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@75 -- # rdma_devs=() 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 
00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@89 -- # continue 2 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@89 -- # continue 2 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # is_hw=yes 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@61 -- # uname 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_cm 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_core 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_umad 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe iw_cm 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # key_initiator=target1 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:09:08.920 11:32:11 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:09:08.920 10.0.0.1 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:09:08.920 
11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:09:08.920 10.0.0.2 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # set_up mlx_0_1 
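The two `set_ip` calls above derive dotted-quad addresses from integers drawn from `ip_pool` (0x0a000001): the trace shows `printf '%u.%u.%u.%u' 10 0 0 1` for 167772161. A self-contained re-implementation of that conversion; the byte-shifting is an assumption about `val_to_ip`'s internals, inferred from its visible output:

```shell
#!/usr/bin/env bash
# Unpack a 32-bit integer into a dotted-quad IPv4 address, matching the
# values in the trace: 167772161 -> 10.0.0.1, 167772162 -> 10.0.0.2.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
}
val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

Keeping the pool as a plain integer is what lets setup.sh hand out consecutive initiator/target pairs with `(( ip_pool += 2 ))`, as seen in the `setup_interfaces` arithmetic above.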
00:09:08.920 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/setup.sh@200 -- # get_target_ip_address 0 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 
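`get_ip_address` recovers the address that `set_ip` stashed in the interface's `ifalias` sysfs attribute earlier in the trace. A sketch of that store/lookup round-trip, using a temporary directory in place of `/sys/class/net` (writing the real sysfs file needs root and the actual NIC):

```shell
#!/usr/bin/env bash
set -eu
# Stand-in for /sys/class/net; the real script writes the NIC's ifalias.
sysnet=$(mktemp -d)
mkdir -p "$sysnet/mlx_0_1"

# set_ip side: record the address (nvmf/setup.sh@210 tees onto sysfs).
echo 10.0.0.2 | tee "$sysnet/mlx_0_1/ifalias" >/dev/null

# get_ip_address side: read it back (nvmf/setup.sh@172).
ip=$(cat "$sysnet/mlx_0_1/ifalias")
echo "$ip"   # 10.0.0.2
rm -rf "$sysnet"
```

Using `ifalias` as a scratchpad means the test harness can map the logical name `target0` to a device and then to an IP without keeping any state of its own between helper invocations.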
00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:09:08.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:09:08.921 00:09:08.921 --- 10.0.0.2 ping statistics --- 00:09:08.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.921 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # 
eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:09:08.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:08.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.024 ms 00:09:08.921 00:09:08.921 --- 10.0.0.2 ping statistics --- 00:09:08.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.921 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # return 0 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:09:08.921 11:32:11 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- 
# local dev=target1 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:09:08.921 11:32:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:09:08.921 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:09:08.921 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:09:08.921 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:09:08.921 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:08.921 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:08.921 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:08.921 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:09:08.921 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:09:08.921 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:09:08.921 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:09:08.921 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:09:08.921 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:09:08.922 11:32:12 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target1 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target1 
]] 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:09:08.922 11:32:12 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=1550805 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 1550805 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1550805 ']' 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:08.922 [2024-11-20 11:32:12.139270] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
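`waitforlisten` above blocks until the freshly forked `nvmf_tgt` (pid 1550805 here) opens `/var/tmp/spdk.sock`. A hedged sketch of such a poll loop, run against a plain file so it works without SPDK; the helper name `wait_for_path` is invented, and the retry budget mirrors the `max_retries=100` visible in the trace:

```shell
#!/usr/bin/env bash
# Poll until a path exists, up to max_retries attempts -- the shape of
# autotest_common.sh's waitforlisten, minus the actual RPC probe.
wait_for_path() {
  local path=$1 max_retries=${2:-100} i=0
  while (( i++ < max_retries )); do
    [[ -e $path ]] && return 0
    sleep 0.1
  done
  return 1
}

sock=$(mktemp -u)                # hypothetical stand-in for /var/tmp/spdk.sock
( sleep 0.3; touch "$sock" ) &   # "server" comes up shortly
wait_for_path "$sock" && echo "listening: $sock"
rm -f "$sock"; wait
```

The real helper additionally issues an RPC over the socket to confirm the target is answering, not just that the socket file exists.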
00:09:08.922 [2024-11-20 11:32:12.139340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.922 [2024-11-20 11:32:12.216503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.922 [2024-11-20 11:32:12.265772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.922 [2024-11-20 11:32:12.265820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.922 [2024-11-20 11:32:12.265830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.922 [2024-11-20 11:32:12.265839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.922 [2024-11-20 11:32:12.265846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:08.922 [2024-11-20 11:32:12.267192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.922 [2024-11-20 11:32:12.267213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.922 [2024-11-20 11:32:12.267290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.922 [2024-11-20 11:32:12.267291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:08.922 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:09.180 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.180 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:09.180 [2024-11-20 11:32:12.626196] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x899220/0x89d710) succeed. 00:09:09.180 [2024-11-20 11:32:12.635502] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x89a8b0/0x8dedb0) succeed. 
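The rpc.py calls that follow create Malloc bdevs and collect their names into space-separated strings (`malloc_bdevs='Malloc0 '`, then `malloc_bdevs+=Malloc1`), which fio.sh later word-splits in `for malloc_bdev in $malloc_bdevs`. A minimal sketch of that accumulate-then-iterate pattern with the names from the log; the `added` array is only for illustration:

```shell
#!/usr/bin/env bash
# Accumulate bdev names the way target/fio.sh does: each rpc.py
# bdev_malloc_create prints a name, which is appended with a space.
malloc_bdevs='Malloc0 '      # first create
malloc_bdevs+=Malloc1        # second create

added=()
for malloc_bdev in $malloc_bdevs; do   # unquoted: word-splitting intended
  # stand-in for: rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $malloc_bdev
  added+=("$malloc_bdev")
done
echo "${added[@]}"   # Malloc0 Malloc1
```

The deliberately unquoted `$malloc_bdevs` is what turns the flat string back into individual bdev names for `nvmf_subsystem_add_ns`.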
00:09:09.437 11:32:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:09.694 11:32:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:09.694 11:32:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:09.951 11:32:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:09.951 11:32:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:10.210 11:32:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:10.210 11:32:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:10.210 11:32:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:10.210 11:32:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:10.467 11:32:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:10.724 11:32:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:10.724 11:32:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:10.982 11:32:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:10.982 11:32:14 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:11.240 11:32:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:11.240 11:32:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:11.497 11:32:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:11.497 11:32:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:11.497 11:32:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:11.754 11:32:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:11.754 11:32:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:12.011 11:32:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:09:12.269 [2024-11-20 11:32:15.491452] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:09:12.269 11:32:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:12.269 11:32:15 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:12.527 11:32:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:13.459 11:32:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:13.459 11:32:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:13.459 11:32:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:13.459 11:32:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:13.459 11:32:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:13.459 11:32:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:15.979 11:32:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:15.980 11:32:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:15.980 11:32:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:15.980 11:32:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:15.980 11:32:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:15.980 11:32:18 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:15.980 11:32:18 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:15.980 [global] 00:09:15.980 thread=1 00:09:15.980 invalidate=1 00:09:15.980 rw=write 00:09:15.980 time_based=1 00:09:15.980 runtime=1 00:09:15.980 ioengine=libaio 00:09:15.980 direct=1 00:09:15.980 bs=4096 00:09:15.980 iodepth=1 00:09:15.980 norandommap=0 00:09:15.980 numjobs=1 00:09:15.980 00:09:15.980 verify_dump=1 00:09:15.980 verify_backlog=512 00:09:15.980 verify_state_save=0 00:09:15.980 do_verify=1 00:09:15.980 verify=crc32c-intel 00:09:15.980 [job0] 00:09:15.980 filename=/dev/nvme0n1 00:09:15.980 [job1] 00:09:15.980 filename=/dev/nvme0n2 00:09:15.980 [job2] 00:09:15.980 filename=/dev/nvme0n3 00:09:15.980 [job3] 00:09:15.980 filename=/dev/nvme0n4 00:09:15.980 Could not set queue depth (nvme0n1) 00:09:15.980 Could not set queue depth (nvme0n2) 00:09:15.980 Could not set queue depth (nvme0n3) 00:09:15.980 Could not set queue depth (nvme0n4) 00:09:15.980 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.980 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.980 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.980 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.980 fio-3.35 00:09:15.980 Starting 4 threads 00:09:17.357 00:09:17.357 job0: (groupid=0, jobs=1): err= 0: pid=1551900: Wed Nov 20 11:32:20 2024 00:09:17.357 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:09:17.357 slat (nsec): min=8585, max=39683, avg=9537.98, stdev=1266.49 00:09:17.357 clat (usec): min=67, max=283, avg=107.83, stdev=27.26 00:09:17.357 lat (usec): min=76, max=292, avg=117.37, stdev=27.63 00:09:17.357 clat percentiles (usec): 00:09:17.357 | 1.00th=[ 
75], 5.00th=[ 78], 10.00th=[ 79], 20.00th=[ 82], 00:09:17.357 | 30.00th=[ 85], 40.00th=[ 89], 50.00th=[ 109], 60.00th=[ 119], 00:09:17.357 | 70.00th=[ 125], 80.00th=[ 130], 90.00th=[ 145], 95.00th=[ 155], 00:09:17.357 | 99.00th=[ 176], 99.50th=[ 192], 99.90th=[ 221], 99.95th=[ 243], 00:09:17.357 | 99.99th=[ 285] 00:09:17.357 write: IOPS=4480, BW=17.5MiB/s (18.3MB/s)(17.5MiB/1002msec); 0 zone resets 00:09:17.357 slat (nsec): min=9822, max=43730, avg=12207.29, stdev=1451.63 00:09:17.357 clat (usec): min=61, max=470, avg=98.30, stdev=24.60 00:09:17.357 lat (usec): min=77, max=482, avg=110.51, stdev=24.79 00:09:17.357 clat percentiles (usec): 00:09:17.357 | 1.00th=[ 71], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 77], 00:09:17.357 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 88], 60.00th=[ 108], 00:09:17.357 | 70.00th=[ 113], 80.00th=[ 118], 90.00th=[ 130], 95.00th=[ 143], 00:09:17.357 | 99.00th=[ 167], 99.50th=[ 180], 99.90th=[ 202], 99.95th=[ 212], 00:09:17.357 | 99.99th=[ 469] 00:09:17.357 bw ( KiB/s): min=17904, max=18008, per=27.09%, avg=17956.00, stdev=73.54, samples=2 00:09:17.357 iops : min= 4476, max= 4502, avg=4489.00, stdev=18.38, samples=2 00:09:17.357 lat (usec) : 100=51.44%, 250=48.53%, 500=0.03% 00:09:17.357 cpu : usr=6.69%, sys=8.49%, ctx=8586, majf=0, minf=1 00:09:17.357 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:17.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.358 issued rwts: total=4096,4489,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.358 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:17.358 job1: (groupid=0, jobs=1): err= 0: pid=1551901: Wed Nov 20 11:32:20 2024 00:09:17.358 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:09:17.358 slat (nsec): min=8575, max=40268, avg=9240.57, stdev=1176.51 00:09:17.358 clat (usec): min=57, max=272, avg=97.21, stdev=24.49 
00:09:17.358 lat (usec): min=66, max=288, avg=106.45, stdev=24.63 00:09:17.358 clat percentiles (usec): 00:09:17.358 | 1.00th=[ 72], 5.00th=[ 75], 10.00th=[ 77], 20.00th=[ 80], 00:09:17.358 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 89], 00:09:17.358 | 70.00th=[ 97], 80.00th=[ 129], 90.00th=[ 137], 95.00th=[ 143], 00:09:17.358 | 99.00th=[ 159], 99.50th=[ 169], 99.90th=[ 188], 99.95th=[ 200], 00:09:17.358 | 99.99th=[ 273] 00:09:17.358 write: IOPS=4679, BW=18.3MiB/s (19.2MB/s)(18.3MiB/1001msec); 0 zone resets 00:09:17.358 slat (nsec): min=9794, max=39033, avg=11944.82, stdev=1236.48 00:09:17.358 clat (usec): min=63, max=209, avg=91.35, stdev=21.30 00:09:17.358 lat (usec): min=75, max=225, avg=103.30, stdev=21.31 00:09:17.358 clat percentiles (usec): 00:09:17.358 | 1.00th=[ 68], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 76], 00:09:17.358 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 82], 60.00th=[ 85], 00:09:17.358 | 70.00th=[ 92], 80.00th=[ 120], 90.00th=[ 126], 95.00th=[ 131], 00:09:17.358 | 99.00th=[ 141], 99.50th=[ 145], 99.90th=[ 157], 99.95th=[ 161], 00:09:17.358 | 99.99th=[ 210] 00:09:17.358 bw ( KiB/s): min=16728, max=16728, per=25.24%, avg=16728.00, stdev= 0.00, samples=1 00:09:17.358 iops : min= 4182, max= 4182, avg=4182.00, stdev= 0.00, samples=1 00:09:17.358 lat (usec) : 100=71.63%, 250=28.36%, 500=0.01% 00:09:17.358 cpu : usr=5.90%, sys=10.30%, ctx=9292, majf=0, minf=1 00:09:17.358 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:17.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.358 issued rwts: total=4608,4684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.358 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:17.358 job2: (groupid=0, jobs=1): err= 0: pid=1551902: Wed Nov 20 11:32:20 2024 00:09:17.358 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:09:17.358 slat (nsec): 
min=8736, max=34801, avg=9405.90, stdev=1184.98 00:09:17.358 clat (usec): min=75, max=336, avg=122.10, stdev=21.30 00:09:17.358 lat (usec): min=84, max=346, avg=131.51, stdev=21.29 00:09:17.358 clat percentiles (usec): 00:09:17.358 | 1.00th=[ 83], 5.00th=[ 88], 10.00th=[ 91], 20.00th=[ 99], 00:09:17.358 | 30.00th=[ 117], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 128], 00:09:17.358 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 147], 95.00th=[ 157], 00:09:17.358 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 206], 99.95th=[ 265], 00:09:17.358 | 99.99th=[ 338] 00:09:17.358 write: IOPS=3842, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1001msec); 0 zone resets 00:09:17.358 slat (nsec): min=10591, max=40583, avg=12036.70, stdev=1501.68 00:09:17.358 clat (usec): min=72, max=483, avg=120.40, stdev=20.81 00:09:17.358 lat (usec): min=84, max=495, avg=132.43, stdev=20.78 00:09:17.358 clat percentiles (usec): 00:09:17.358 | 1.00th=[ 80], 5.00th=[ 87], 10.00th=[ 95], 20.00th=[ 108], 00:09:17.358 | 30.00th=[ 113], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 123], 00:09:17.358 | 70.00th=[ 128], 80.00th=[ 135], 90.00th=[ 145], 95.00th=[ 153], 00:09:17.358 | 99.00th=[ 176], 99.50th=[ 186], 99.90th=[ 273], 99.95th=[ 289], 00:09:17.358 | 99.99th=[ 482] 00:09:17.358 bw ( KiB/s): min=16384, max=16384, per=24.72%, avg=16384.00, stdev= 0.00, samples=1 00:09:17.358 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:17.358 lat (usec) : 100=16.42%, 250=83.49%, 500=0.09% 00:09:17.358 cpu : usr=5.00%, sys=8.00%, ctx=7430, majf=0, minf=1 00:09:17.358 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:17.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.358 issued rwts: total=3584,3846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.358 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:17.358 job3: (groupid=0, jobs=1): err= 0: 
pid=1551903: Wed Nov 20 11:32:20 2024 00:09:17.358 read: IOPS=3425, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1001msec) 00:09:17.358 slat (nsec): min=8825, max=37556, avg=10001.54, stdev=1547.67 00:09:17.358 clat (usec): min=78, max=472, avg=135.93, stdev=17.13 00:09:17.358 lat (usec): min=87, max=482, avg=145.93, stdev=17.40 00:09:17.358 clat percentiles (usec): 00:09:17.358 | 1.00th=[ 98], 5.00th=[ 118], 10.00th=[ 122], 20.00th=[ 126], 00:09:17.358 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:09:17.358 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 163], 00:09:17.358 | 99.00th=[ 196], 99.50th=[ 206], 99.90th=[ 265], 99.95th=[ 302], 00:09:17.358 | 99.99th=[ 474] 00:09:17.358 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:09:17.358 slat (nsec): min=10532, max=64375, avg=12544.73, stdev=1884.12 00:09:17.358 clat (usec): min=75, max=301, avg=121.54, stdev=16.77 00:09:17.358 lat (usec): min=87, max=317, avg=134.09, stdev=16.88 00:09:17.358 clat percentiles (usec): 00:09:17.358 | 1.00th=[ 82], 5.00th=[ 89], 10.00th=[ 105], 20.00th=[ 115], 00:09:17.358 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 123], 60.00th=[ 125], 00:09:17.358 | 70.00th=[ 127], 80.00th=[ 131], 90.00th=[ 137], 95.00th=[ 143], 00:09:17.358 | 99.00th=[ 169], 99.50th=[ 186], 99.90th=[ 265], 99.95th=[ 297], 00:09:17.358 | 99.99th=[ 302] 00:09:17.358 bw ( KiB/s): min=16384, max=16384, per=24.72%, avg=16384.00, stdev= 0.00, samples=1 00:09:17.358 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:17.358 lat (usec) : 100=5.32%, 250=94.54%, 500=0.14% 00:09:17.358 cpu : usr=4.70%, sys=8.00%, ctx=7014, majf=0, minf=1 00:09:17.358 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:17.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.358 issued rwts: total=3429,3584,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:09:17.358 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:17.358 00:09:17.358 Run status group 0 (all jobs): 00:09:17.358 READ: bw=61.3MiB/s (64.2MB/s), 13.4MiB/s-18.0MiB/s (14.0MB/s-18.9MB/s), io=61.4MiB (64.4MB), run=1001-1002msec 00:09:17.358 WRITE: bw=64.7MiB/s (67.9MB/s), 14.0MiB/s-18.3MiB/s (14.7MB/s-19.2MB/s), io=64.9MiB (68.0MB), run=1001-1002msec 00:09:17.358 00:09:17.358 Disk stats (read/write): 00:09:17.358 nvme0n1: ios=3634/3836, merge=0/0, ticks=370/364, in_queue=734, util=86.07% 00:09:17.358 nvme0n2: ios=3708/4096, merge=0/0, ticks=341/345, in_queue=686, util=86.47% 00:09:17.358 nvme0n3: ios=3072/3098, merge=0/0, ticks=378/363, in_queue=741, util=88.91% 00:09:17.358 nvme0n4: ios=2893/3072, merge=0/0, ticks=375/370, in_queue=745, util=89.66% 00:09:17.358 11:32:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:17.358 [global] 00:09:17.358 thread=1 00:09:17.358 invalidate=1 00:09:17.358 rw=randwrite 00:09:17.358 time_based=1 00:09:17.358 runtime=1 00:09:17.358 ioengine=libaio 00:09:17.358 direct=1 00:09:17.358 bs=4096 00:09:17.358 iodepth=1 00:09:17.358 norandommap=0 00:09:17.358 numjobs=1 00:09:17.358 00:09:17.358 verify_dump=1 00:09:17.358 verify_backlog=512 00:09:17.358 verify_state_save=0 00:09:17.358 do_verify=1 00:09:17.358 verify=crc32c-intel 00:09:17.358 [job0] 00:09:17.358 filename=/dev/nvme0n1 00:09:17.358 [job1] 00:09:17.358 filename=/dev/nvme0n2 00:09:17.358 [job2] 00:09:17.358 filename=/dev/nvme0n3 00:09:17.358 [job3] 00:09:17.359 filename=/dev/nvme0n4 00:09:17.359 Could not set queue depth (nvme0n1) 00:09:17.359 Could not set queue depth (nvme0n2) 00:09:17.359 Could not set queue depth (nvme0n3) 00:09:17.359 Could not set queue depth (nvme0n4) 00:09:17.616 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:09:17.616 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:17.616 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:17.616 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:17.616 fio-3.35 00:09:17.616 Starting 4 threads 00:09:18.988 00:09:18.988 job0: (groupid=0, jobs=1): err= 0: pid=1552208: Wed Nov 20 11:32:22 2024 00:09:18.988 read: IOPS=5418, BW=21.2MiB/s (22.2MB/s)(21.2MiB/1001msec) 00:09:18.988 slat (nsec): min=8372, max=41548, avg=9027.88, stdev=1164.50 00:09:18.988 clat (usec): min=62, max=169, avg=79.11, stdev= 6.21 00:09:18.988 lat (usec): min=72, max=178, avg=88.13, stdev= 6.42 00:09:18.988 clat percentiles (usec): 00:09:18.988 | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 75], 00:09:18.988 | 30.00th=[ 76], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 80], 00:09:18.988 | 70.00th=[ 82], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 91], 00:09:18.988 | 99.00th=[ 96], 99.50th=[ 98], 99.90th=[ 103], 99.95th=[ 111], 00:09:18.988 | 99.99th=[ 169] 00:09:18.988 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:09:18.988 slat (nsec): min=10736, max=45639, avg=11511.23, stdev=1153.53 00:09:18.988 clat (usec): min=59, max=363, avg=75.76, stdev= 7.13 00:09:18.988 lat (usec): min=71, max=374, avg=87.27, stdev= 7.26 00:09:18.988 clat percentiles (usec): 00:09:18.988 | 1.00th=[ 65], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 71], 00:09:18.988 | 30.00th=[ 73], 40.00th=[ 74], 50.00th=[ 76], 60.00th=[ 77], 00:09:18.988 | 70.00th=[ 79], 80.00th=[ 81], 90.00th=[ 84], 95.00th=[ 87], 00:09:18.988 | 99.00th=[ 93], 99.50th=[ 95], 99.90th=[ 99], 99.95th=[ 101], 00:09:18.988 | 99.99th=[ 363] 00:09:18.988 bw ( KiB/s): min=23081, max=23081, per=38.46%, avg=23081.00, stdev= 0.00, samples=1 00:09:18.988 iops : min= 5770, max= 5770, avg=5770.00, stdev= 0.00, samples=1 
00:09:18.988 lat (usec) : 100=99.86%, 250=0.13%, 500=0.01% 00:09:18.988 cpu : usr=6.40%, sys=12.50%, ctx=11056, majf=0, minf=1 00:09:18.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:18.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.988 issued rwts: total=5424,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:18.988 job1: (groupid=0, jobs=1): err= 0: pid=1552209: Wed Nov 20 11:32:22 2024 00:09:18.988 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:18.988 slat (nsec): min=8295, max=29575, avg=9254.07, stdev=963.60 00:09:18.988 clat (usec): min=66, max=275, avg=152.09, stdev=37.88 00:09:18.988 lat (usec): min=74, max=285, avg=161.34, stdev=38.08 00:09:18.988 clat percentiles (usec): 00:09:18.988 | 1.00th=[ 72], 5.00th=[ 76], 10.00th=[ 80], 20.00th=[ 126], 00:09:18.988 | 30.00th=[ 149], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 172], 00:09:18.988 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 190], 00:09:18.988 | 99.00th=[ 208], 99.50th=[ 225], 99.90th=[ 243], 99.95th=[ 255], 00:09:18.988 | 99.99th=[ 277] 00:09:18.988 write: IOPS=3239, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec); 0 zone resets 00:09:18.988 slat (nsec): min=10281, max=39578, avg=11389.23, stdev=1235.10 00:09:18.988 clat (usec): min=63, max=249, avg=139.32, stdev=36.68 00:09:18.988 lat (usec): min=75, max=261, avg=150.71, stdev=36.71 00:09:18.988 clat percentiles (usec): 00:09:18.988 | 1.00th=[ 69], 5.00th=[ 73], 10.00th=[ 76], 20.00th=[ 88], 00:09:18.988 | 30.00th=[ 135], 40.00th=[ 143], 50.00th=[ 153], 60.00th=[ 159], 00:09:18.988 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 180], 00:09:18.988 | 99.00th=[ 202], 99.50th=[ 219], 99.90th=[ 229], 99.95th=[ 239], 00:09:18.988 | 99.99th=[ 251] 00:09:18.988 bw ( KiB/s): min=12288, max=12288, 
per=20.47%, avg=12288.00, stdev= 0.00, samples=1 00:09:18.988 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:18.988 lat (usec) : 100=19.49%, 250=80.48%, 500=0.03% 00:09:18.988 cpu : usr=4.30%, sys=6.40%, ctx=6315, majf=0, minf=1 00:09:18.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:18.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.988 issued rwts: total=3072,3243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:18.988 job2: (groupid=0, jobs=1): err= 0: pid=1552210: Wed Nov 20 11:32:22 2024 00:09:18.988 read: IOPS=2753, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec) 00:09:18.988 slat (nsec): min=8615, max=22475, avg=9585.88, stdev=928.70 00:09:18.988 clat (usec): min=77, max=252, avg=166.25, stdev=22.02 00:09:18.988 lat (usec): min=87, max=262, avg=175.84, stdev=22.08 00:09:18.988 clat percentiles (usec): 00:09:18.988 | 1.00th=[ 93], 5.00th=[ 133], 10.00th=[ 141], 20.00th=[ 151], 00:09:18.989 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:09:18.989 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 196], 00:09:18.989 | 99.00th=[ 225], 99.50th=[ 233], 99.90th=[ 249], 99.95th=[ 253], 00:09:18.989 | 99.99th=[ 253] 00:09:18.989 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:18.989 slat (nsec): min=10470, max=47002, avg=11652.53, stdev=1375.84 00:09:18.989 clat (usec): min=72, max=238, avg=151.63, stdev=23.25 00:09:18.989 lat (usec): min=84, max=250, avg=163.28, stdev=23.27 00:09:18.989 clat percentiles (usec): 00:09:18.989 | 1.00th=[ 82], 5.00th=[ 105], 10.00th=[ 128], 20.00th=[ 137], 00:09:18.989 | 30.00th=[ 141], 40.00th=[ 147], 50.00th=[ 155], 60.00th=[ 161], 00:09:18.989 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 174], 95.00th=[ 182], 00:09:18.989 | 99.00th=[ 
212], 99.50th=[ 221], 99.90th=[ 233], 99.95th=[ 239], 00:09:18.989 | 99.99th=[ 239] 00:09:18.989 bw ( KiB/s): min=12288, max=12288, per=20.47%, avg=12288.00, stdev= 0.00, samples=1 00:09:18.989 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:18.989 lat (usec) : 100=3.00%, 250=96.96%, 500=0.03% 00:09:18.989 cpu : usr=3.70%, sys=6.30%, ctx=5828, majf=0, minf=1 00:09:18.989 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:18.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.989 issued rwts: total=2756,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:18.989 job3: (groupid=0, jobs=1): err= 0: pid=1552214: Wed Nov 20 11:32:22 2024 00:09:18.989 read: IOPS=2600, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec) 00:09:18.989 slat (nsec): min=8856, max=31069, avg=9563.96, stdev=1109.50 00:09:18.989 clat (usec): min=91, max=264, avg=170.53, stdev=20.20 00:09:18.989 lat (usec): min=101, max=273, avg=180.10, stdev=20.21 00:09:18.989 clat percentiles (usec): 00:09:18.989 | 1.00th=[ 110], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 157], 00:09:18.989 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:09:18.989 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 202], 00:09:18.989 | 99.00th=[ 233], 99.50th=[ 239], 99.90th=[ 245], 99.95th=[ 253], 00:09:18.989 | 99.99th=[ 265] 00:09:18.989 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:18.989 slat (nsec): min=10730, max=40371, avg=11745.14, stdev=1324.42 00:09:18.989 clat (usec): min=76, max=236, avg=156.93, stdev=21.03 00:09:18.989 lat (usec): min=87, max=247, avg=168.67, stdev=21.09 00:09:18.989 clat percentiles (usec): 00:09:18.989 | 1.00th=[ 98], 5.00th=[ 129], 10.00th=[ 135], 20.00th=[ 139], 00:09:18.989 | 30.00th=[ 145], 40.00th=[ 153], 
50.00th=[ 159], 60.00th=[ 163], 00:09:18.989 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 190], 00:09:18.989 | 99.00th=[ 215], 99.50th=[ 221], 99.90th=[ 235], 99.95th=[ 237], 00:09:18.989 | 99.99th=[ 237] 00:09:18.989 bw ( KiB/s): min=12288, max=12288, per=20.47%, avg=12288.00, stdev= 0.00, samples=1 00:09:18.989 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:18.989 lat (usec) : 100=0.78%, 250=99.19%, 500=0.04% 00:09:18.989 cpu : usr=3.00%, sys=6.70%, ctx=5675, majf=0, minf=1 00:09:18.989 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:18.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.989 issued rwts: total=2603,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:18.989 00:09:18.989 Run status group 0 (all jobs): 00:09:18.989 READ: bw=54.1MiB/s (56.7MB/s), 10.2MiB/s-21.2MiB/s (10.7MB/s-22.2MB/s), io=54.1MiB (56.8MB), run=1001-1001msec 00:09:18.989 WRITE: bw=58.6MiB/s (61.5MB/s), 12.0MiB/s-22.0MiB/s (12.6MB/s-23.0MB/s), io=58.7MiB (61.5MB), run=1001-1001msec 00:09:18.989 00:09:18.989 Disk stats (read/write): 00:09:18.989 nvme0n1: ios=4658/4763, merge=0/0, ticks=352/330, in_queue=682, util=85.86% 00:09:18.989 nvme0n2: ios=2368/2560, merge=0/0, ticks=378/365, in_queue=743, util=86.46% 00:09:18.989 nvme0n3: ios=2333/2560, merge=0/0, ticks=382/373, in_queue=755, util=88.90% 00:09:18.989 nvme0n4: ios=2183/2560, merge=0/0, ticks=360/389, in_queue=749, util=89.65% 00:09:18.989 11:32:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:18.989 [global] 00:09:18.989 thread=1 00:09:18.989 invalidate=1 00:09:18.989 rw=write 00:09:18.989 time_based=1 00:09:18.989 runtime=1 00:09:18.989 
ioengine=libaio 00:09:18.989 direct=1 00:09:18.989 bs=4096 00:09:18.989 iodepth=128 00:09:18.989 norandommap=0 00:09:18.989 numjobs=1 00:09:18.989 00:09:18.989 verify_dump=1 00:09:18.989 verify_backlog=512 00:09:18.989 verify_state_save=0 00:09:18.989 do_verify=1 00:09:18.989 verify=crc32c-intel 00:09:18.989 [job0] 00:09:18.989 filename=/dev/nvme0n1 00:09:18.989 [job1] 00:09:18.989 filename=/dev/nvme0n2 00:09:18.989 [job2] 00:09:18.989 filename=/dev/nvme0n3 00:09:18.989 [job3] 00:09:18.989 filename=/dev/nvme0n4 00:09:18.989 Could not set queue depth (nvme0n1) 00:09:18.989 Could not set queue depth (nvme0n2) 00:09:18.989 Could not set queue depth (nvme0n3) 00:09:18.989 Could not set queue depth (nvme0n4) 00:09:18.989 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:18.989 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:18.989 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:18.989 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:18.989 fio-3.35 00:09:18.989 Starting 4 threads 00:09:20.367 00:09:20.367 job0: (groupid=0, jobs=1): err= 0: pid=1552570: Wed Nov 20 11:32:23 2024 00:09:20.367 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:09:20.367 slat (nsec): min=1985, max=5793.5k, avg=68440.38, stdev=348852.11 00:09:20.367 clat (usec): min=3172, max=20530, avg=9411.44, stdev=3454.79 00:09:20.367 lat (usec): min=3178, max=20547, avg=9479.88, stdev=3467.93 00:09:20.367 clat percentiles (usec): 00:09:20.367 | 1.00th=[ 4146], 5.00th=[ 4883], 10.00th=[ 5342], 20.00th=[ 6194], 00:09:20.367 | 30.00th=[ 7046], 40.00th=[ 7767], 50.00th=[ 8848], 60.00th=[ 9765], 00:09:20.367 | 70.00th=[11076], 80.00th=[12780], 90.00th=[14222], 95.00th=[15664], 00:09:20.367 | 99.00th=[17957], 99.50th=[18744], 99.90th=[20055], 
99.95th=[20055], 00:09:20.367 | 99.99th=[20579] 00:09:20.367 write: IOPS=6772, BW=26.5MiB/s (27.7MB/s)(26.5MiB/1002msec); 0 zone resets 00:09:20.367 slat (usec): min=2, max=6272, avg=75.27, stdev=360.21 00:09:20.367 clat (usec): min=458, max=21726, avg=9422.47, stdev=3866.80 00:09:20.367 lat (usec): min=1281, max=22080, avg=9497.73, stdev=3888.73 00:09:20.367 clat percentiles (usec): 00:09:20.367 | 1.00th=[ 3425], 5.00th=[ 4817], 10.00th=[ 5211], 20.00th=[ 5866], 00:09:20.367 | 30.00th=[ 6390], 40.00th=[ 6915], 50.00th=[ 8455], 60.00th=[10421], 00:09:20.367 | 70.00th=[11994], 80.00th=[13173], 90.00th=[14877], 95.00th=[16450], 00:09:20.367 | 99.00th=[18220], 99.50th=[18744], 99.90th=[20841], 99.95th=[21103], 00:09:20.367 | 99.99th=[21627] 00:09:20.367 bw ( KiB/s): min=24576, max=24576, per=25.68%, avg=24576.00, stdev= 0.00, samples=1 00:09:20.367 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:09:20.367 lat (usec) : 500=0.01% 00:09:20.367 lat (msec) : 2=0.24%, 4=1.06%, 10=57.95%, 20=40.68%, 50=0.07% 00:09:20.367 cpu : usr=4.20%, sys=6.69%, ctx=1377, majf=0, minf=1 00:09:20.367 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:20.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:20.367 issued rwts: total=6656,6786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:20.367 job1: (groupid=0, jobs=1): err= 0: pid=1552585: Wed Nov 20 11:32:23 2024 00:09:20.367 read: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec) 00:09:20.367 slat (nsec): min=1995, max=5845.3k, avg=82090.00, stdev=402959.68 00:09:20.367 clat (usec): min=3007, max=22463, avg=11009.57, stdev=4016.19 00:09:20.367 lat (usec): min=3804, max=22467, avg=11091.66, stdev=4035.13 00:09:20.367 clat percentiles (usec): 00:09:20.367 | 1.00th=[ 4621], 5.00th=[ 5735], 10.00th=[ 6456], 
20.00th=[ 7308], 00:09:20.367 | 30.00th=[ 8094], 40.00th=[ 8979], 50.00th=[10159], 60.00th=[11863], 00:09:20.367 | 70.00th=[13304], 80.00th=[14746], 90.00th=[17171], 95.00th=[18220], 00:09:20.367 | 99.00th=[20841], 99.50th=[21627], 99.90th=[21890], 99.95th=[21890], 00:09:20.367 | 99.99th=[22414] 00:09:20.367 write: IOPS=6453, BW=25.2MiB/s (26.4MB/s)(25.2MiB/1001msec); 0 zone resets 00:09:20.367 slat (usec): min=2, max=6610, avg=72.00, stdev=329.38 00:09:20.367 clat (usec): min=382, max=21044, avg=9150.41, stdev=3500.54 00:09:20.367 lat (usec): min=1003, max=21056, avg=9222.41, stdev=3517.22 00:09:20.367 clat percentiles (usec): 00:09:20.367 | 1.00th=[ 3720], 5.00th=[ 4621], 10.00th=[ 5276], 20.00th=[ 6194], 00:09:20.367 | 30.00th=[ 6718], 40.00th=[ 7308], 50.00th=[ 8586], 60.00th=[ 9503], 00:09:20.367 | 70.00th=[10945], 80.00th=[11994], 90.00th=[13960], 95.00th=[15270], 00:09:20.367 | 99.00th=[19268], 99.50th=[20317], 99.90th=[21103], 99.95th=[21103], 00:09:20.367 | 99.99th=[21103] 00:09:20.367 bw ( KiB/s): min=24576, max=24576, per=25.68%, avg=24576.00, stdev= 0.00, samples=1 00:09:20.367 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:09:20.367 lat (usec) : 500=0.01% 00:09:20.367 lat (msec) : 2=0.10%, 4=1.02%, 10=55.19%, 20=42.62%, 50=1.06% 00:09:20.367 cpu : usr=3.70%, sys=6.60%, ctx=1367, majf=0, minf=2 00:09:20.367 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:20.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:20.367 issued rwts: total=6144,6460,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:20.367 job2: (groupid=0, jobs=1): err= 0: pid=1552602: Wed Nov 20 11:32:23 2024 00:09:20.367 read: IOPS=5462, BW=21.3MiB/s (22.4MB/s)(21.4MiB/1003msec) 00:09:20.367 slat (usec): min=2, max=6614, avg=94.64, stdev=458.13 00:09:20.367 clat 
(usec): min=511, max=22890, avg=12296.00, stdev=4349.99 00:09:20.367 lat (usec): min=3480, max=23152, avg=12390.64, stdev=4372.83 00:09:20.367 clat percentiles (usec): 00:09:20.367 | 1.00th=[ 4686], 5.00th=[ 5735], 10.00th=[ 6783], 20.00th=[ 8356], 00:09:20.367 | 30.00th=[ 9372], 40.00th=[10945], 50.00th=[11994], 60.00th=[13173], 00:09:20.367 | 70.00th=[14353], 80.00th=[16319], 90.00th=[18744], 95.00th=[20317], 00:09:20.367 | 99.00th=[22152], 99.50th=[22414], 99.90th=[22676], 99.95th=[22676], 00:09:20.367 | 99.99th=[22938] 00:09:20.367 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:09:20.367 slat (usec): min=2, max=4844, avg=81.02, stdev=368.45 00:09:20.367 clat (usec): min=3170, max=24185, avg=10529.60, stdev=3497.84 00:09:20.367 lat (usec): min=3238, max=25315, avg=10610.62, stdev=3510.40 00:09:20.367 clat percentiles (usec): 00:09:20.367 | 1.00th=[ 4555], 5.00th=[ 5735], 10.00th=[ 6456], 20.00th=[ 7570], 00:09:20.367 | 30.00th=[ 8225], 40.00th=[ 9110], 50.00th=[10028], 60.00th=[11076], 00:09:20.367 | 70.00th=[11863], 80.00th=[13304], 90.00th=[15270], 95.00th=[17695], 00:09:20.367 | 99.00th=[20579], 99.50th=[21627], 99.90th=[24249], 99.95th=[24249], 00:09:20.367 | 99.99th=[24249] 00:09:20.367 bw ( KiB/s): min=20480, max=24576, per=23.54%, avg=22528.00, stdev=2896.31, samples=2 00:09:20.367 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:09:20.367 lat (usec) : 750=0.01% 00:09:20.367 lat (msec) : 4=0.44%, 10=41.52%, 20=54.69%, 50=3.34% 00:09:20.367 cpu : usr=3.29%, sys=5.69%, ctx=1191, majf=0, minf=1 00:09:20.367 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:20.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:20.367 issued rwts: total=5479,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:20.367 
job3: (groupid=0, jobs=1): err= 0: pid=1552609: Wed Nov 20 11:32:23 2024 00:09:20.367 read: IOPS=4807, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1003msec) 00:09:20.367 slat (usec): min=2, max=6222, avg=100.29, stdev=452.70 00:09:20.367 clat (usec): min=2138, max=26148, avg=12799.42, stdev=4869.71 00:09:20.367 lat (usec): min=2157, max=27643, avg=12899.70, stdev=4889.96 00:09:20.367 clat percentiles (usec): 00:09:20.367 | 1.00th=[ 4817], 5.00th=[ 6783], 10.00th=[ 7701], 20.00th=[ 8356], 00:09:20.367 | 30.00th=[ 9110], 40.00th=[10552], 50.00th=[11863], 60.00th=[13173], 00:09:20.367 | 70.00th=[15008], 80.00th=[17433], 90.00th=[20055], 95.00th=[21365], 00:09:20.367 | 99.00th=[25035], 99.50th=[25297], 99.90th=[26084], 99.95th=[26084], 00:09:20.367 | 99.99th=[26084] 00:09:20.367 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:09:20.367 slat (usec): min=2, max=5395, avg=96.12, stdev=433.75 00:09:20.367 clat (usec): min=5198, max=24232, avg=12717.71, stdev=4002.36 00:09:20.367 lat (usec): min=5202, max=24243, avg=12813.82, stdev=4026.19 00:09:20.367 clat percentiles (usec): 00:09:20.367 | 1.00th=[ 6063], 5.00th=[ 7308], 10.00th=[ 7832], 20.00th=[ 8455], 00:09:20.367 | 30.00th=[ 9896], 40.00th=[11469], 50.00th=[12518], 60.00th=[13304], 00:09:20.367 | 70.00th=[14615], 80.00th=[16712], 90.00th=[18482], 95.00th=[19530], 00:09:20.367 | 99.00th=[22152], 99.50th=[23200], 99.90th=[23725], 99.95th=[23725], 00:09:20.367 | 99.99th=[24249] 00:09:20.367 bw ( KiB/s): min=20480, max=20480, per=21.40%, avg=20480.00, stdev= 0.00, samples=2 00:09:20.367 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:09:20.367 lat (msec) : 4=0.18%, 10=33.41%, 20=59.82%, 50=6.59% 00:09:20.367 cpu : usr=3.39%, sys=4.89%, ctx=1288, majf=0, minf=1 00:09:20.367 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:20.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:20.367 issued rwts: total=4822,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.367 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:20.367 00:09:20.367 Run status group 0 (all jobs): 00:09:20.367 READ: bw=90.0MiB/s (94.3MB/s), 18.8MiB/s-25.9MiB/s (19.7MB/s-27.2MB/s), io=90.2MiB (94.6MB), run=1001-1003msec 00:09:20.367 WRITE: bw=93.5MiB/s (98.0MB/s), 19.9MiB/s-26.5MiB/s (20.9MB/s-27.7MB/s), io=93.7MiB (98.3MB), run=1001-1003msec 00:09:20.367 00:09:20.367 Disk stats (read/write): 00:09:20.367 nvme0n1: ios=5170/5483, merge=0/0, ticks=15516/17320, in_queue=32836, util=83.37% 00:09:20.367 nvme0n2: ios=5137/5632, merge=0/0, ticks=17108/15775, in_queue=32883, util=84.11% 00:09:20.367 nvme0n3: ios=4585/4608, merge=0/0, ticks=18084/14922, in_queue=33006, util=87.37% 00:09:20.367 nvme0n4: ios=4162/4608, merge=0/0, ticks=16301/17351, in_queue=33652, util=88.62% 00:09:20.367 11:32:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:20.367 [global] 00:09:20.367 thread=1 00:09:20.367 invalidate=1 00:09:20.367 rw=randwrite 00:09:20.367 time_based=1 00:09:20.367 runtime=1 00:09:20.367 ioengine=libaio 00:09:20.367 direct=1 00:09:20.367 bs=4096 00:09:20.367 iodepth=128 00:09:20.367 norandommap=0 00:09:20.367 numjobs=1 00:09:20.367 00:09:20.367 verify_dump=1 00:09:20.367 verify_backlog=512 00:09:20.367 verify_state_save=0 00:09:20.367 do_verify=1 00:09:20.367 verify=crc32c-intel 00:09:20.367 [job0] 00:09:20.367 filename=/dev/nvme0n1 00:09:20.367 [job1] 00:09:20.367 filename=/dev/nvme0n2 00:09:20.367 [job2] 00:09:20.367 filename=/dev/nvme0n3 00:09:20.367 [job3] 00:09:20.367 filename=/dev/nvme0n4 00:09:20.368 Could not set queue depth (nvme0n1) 00:09:20.368 Could not set queue depth (nvme0n2) 00:09:20.368 Could not set queue depth (nvme0n3) 00:09:20.368 Could not set queue depth (nvme0n4) 
00:09:20.626 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:20.626 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:20.626 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:20.626 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:20.626 fio-3.35 00:09:20.626 Starting 4 threads 00:09:22.002 00:09:22.002 job0: (groupid=0, jobs=1): err= 0: pid=1552977: Wed Nov 20 11:32:25 2024 00:09:22.002 read: IOPS=5000, BW=19.5MiB/s (20.5MB/s)(19.6MiB/1003msec) 00:09:22.002 slat (usec): min=2, max=6350, avg=97.97, stdev=488.76 00:09:22.002 clat (usec): min=1730, max=28076, avg=12823.38, stdev=3539.14 00:09:22.002 lat (usec): min=3824, max=28082, avg=12921.35, stdev=3546.19 00:09:22.002 clat percentiles (usec): 00:09:22.002 | 1.00th=[ 6063], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[ 9765], 00:09:22.002 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12387], 60.00th=[13304], 00:09:22.002 | 70.00th=[14353], 80.00th=[15795], 90.00th=[17695], 95.00th=[18482], 00:09:22.002 | 99.00th=[22152], 99.50th=[27395], 99.90th=[28181], 99.95th=[28181], 00:09:22.002 | 99.99th=[28181] 00:09:22.002 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:09:22.002 slat (usec): min=2, max=7698, avg=94.31, stdev=452.55 00:09:22.002 clat (usec): min=3595, max=28737, avg=12233.07, stdev=4755.01 00:09:22.002 lat (usec): min=3598, max=28741, avg=12327.39, stdev=4784.91 00:09:22.002 clat percentiles (usec): 00:09:22.002 | 1.00th=[ 4228], 5.00th=[ 5800], 10.00th=[ 6652], 20.00th=[ 7898], 00:09:22.002 | 30.00th=[ 8848], 40.00th=[10159], 50.00th=[11469], 60.00th=[12911], 00:09:22.002 | 70.00th=[14615], 80.00th=[16909], 90.00th=[18482], 95.00th=[20579], 00:09:22.002 | 99.00th=[24773], 99.50th=[26608], 99.90th=[28443], 
99.95th=[28705], 00:09:22.002 | 99.99th=[28705] 00:09:22.002 bw ( KiB/s): min=20480, max=20480, per=21.11%, avg=20480.00, stdev= 0.00, samples=2 00:09:22.002 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:09:22.002 lat (msec) : 2=0.01%, 4=0.37%, 10=29.44%, 20=65.85%, 50=4.32% 00:09:22.002 cpu : usr=2.89%, sys=5.39%, ctx=848, majf=0, minf=1 00:09:22.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:22.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.002 issued rwts: total=5015,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.002 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.002 job1: (groupid=0, jobs=1): err= 0: pid=1552978: Wed Nov 20 11:32:25 2024 00:09:22.002 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:09:22.002 slat (usec): min=2, max=4508, avg=74.72, stdev=356.01 00:09:22.002 clat (usec): min=2909, max=25269, avg=9915.34, stdev=3759.89 00:09:22.002 lat (usec): min=3387, max=25275, avg=9990.06, stdev=3780.56 00:09:22.002 clat percentiles (usec): 00:09:22.002 | 1.00th=[ 4490], 5.00th=[ 5604], 10.00th=[ 6063], 20.00th=[ 6783], 00:09:22.002 | 30.00th=[ 7177], 40.00th=[ 7898], 50.00th=[ 8848], 60.00th=[10290], 00:09:22.002 | 70.00th=[11469], 80.00th=[12911], 90.00th=[14877], 95.00th=[17957], 00:09:22.002 | 99.00th=[20841], 99.50th=[21627], 99.90th=[22414], 99.95th=[22938], 00:09:22.002 | 99.99th=[25297] 00:09:22.002 write: IOPS=6933, BW=27.1MiB/s (28.4MB/s)(27.1MiB/1002msec); 0 zone resets 00:09:22.002 slat (usec): min=2, max=5088, avg=67.92, stdev=311.77 00:09:22.002 clat (usec): min=736, max=20203, avg=8752.55, stdev=3416.69 00:09:22.002 lat (usec): min=1204, max=20213, avg=8820.48, stdev=3433.10 00:09:22.002 clat percentiles (usec): 00:09:22.002 | 1.00th=[ 3818], 5.00th=[ 4817], 10.00th=[ 5211], 20.00th=[ 5932], 00:09:22.002 | 30.00th=[ 6456], 
40.00th=[ 7177], 50.00th=[ 7898], 60.00th=[ 8848], 00:09:22.002 | 70.00th=[10028], 80.00th=[11469], 90.00th=[13698], 95.00th=[16909], 00:09:22.002 | 99.00th=[17433], 99.50th=[17695], 99.90th=[20055], 99.95th=[20055], 00:09:22.002 | 99.99th=[20317] 00:09:22.002 bw ( KiB/s): min=24320, max=30240, per=28.13%, avg=27280.00, stdev=4186.07, samples=2 00:09:22.002 iops : min= 6080, max= 7560, avg=6820.00, stdev=1046.52, samples=2 00:09:22.002 lat (usec) : 750=0.01% 00:09:22.002 lat (msec) : 2=0.15%, 4=0.63%, 10=63.56%, 20=34.73%, 50=0.92% 00:09:22.002 cpu : usr=4.00%, sys=6.09%, ctx=1251, majf=0, minf=2 00:09:22.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:22.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.003 issued rwts: total=6656,6947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.003 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.003 job2: (groupid=0, jobs=1): err= 0: pid=1552979: Wed Nov 20 11:32:25 2024 00:09:22.003 read: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec) 00:09:22.003 slat (usec): min=2, max=5773, avg=73.63, stdev=349.84 00:09:22.003 clat (usec): min=3825, max=22450, avg=9564.71, stdev=2863.35 00:09:22.003 lat (usec): min=3827, max=22453, avg=9638.34, stdev=2875.30 00:09:22.003 clat percentiles (usec): 00:09:22.003 | 1.00th=[ 4817], 5.00th=[ 5800], 10.00th=[ 6652], 20.00th=[ 7504], 00:09:22.003 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9372], 00:09:22.003 | 70.00th=[10159], 80.00th=[11469], 90.00th=[13698], 95.00th=[15008], 00:09:22.003 | 99.00th=[18744], 99.50th=[20055], 99.90th=[22414], 99.95th=[22414], 00:09:22.003 | 99.99th=[22414] 00:09:22.003 write: IOPS=6615, BW=25.8MiB/s (27.1MB/s)(25.9MiB/1001msec); 0 zone resets 00:09:22.003 slat (usec): min=2, max=6645, avg=78.17, stdev=369.40 00:09:22.003 clat (usec): min=523, max=22716, avg=10254.96, 
stdev=3570.94 00:09:22.003 lat (usec): min=530, max=23142, avg=10333.13, stdev=3590.46 00:09:22.003 clat percentiles (usec): 00:09:22.003 | 1.00th=[ 4228], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 7439], 00:09:22.003 | 30.00th=[ 8029], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[10028], 00:09:22.003 | 70.00th=[11731], 80.00th=[13960], 90.00th=[16450], 95.00th=[17433], 00:09:22.003 | 99.00th=[17957], 99.50th=[18744], 99.90th=[18744], 99.95th=[22676], 00:09:22.003 | 99.99th=[22676] 00:09:22.003 bw ( KiB/s): min=24576, max=24576, per=25.34%, avg=24576.00, stdev= 0.00, samples=1 00:09:22.003 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:09:22.003 lat (usec) : 750=0.02% 00:09:22.003 lat (msec) : 2=0.11%, 4=0.44%, 10=63.57%, 20=35.56%, 50=0.31% 00:09:22.003 cpu : usr=4.00%, sys=5.60%, ctx=1224, majf=0, minf=1 00:09:22.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:22.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.003 issued rwts: total=6144,6622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.003 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.003 job3: (groupid=0, jobs=1): err= 0: pid=1552980: Wed Nov 20 11:32:25 2024 00:09:22.003 read: IOPS=5536, BW=21.6MiB/s (22.7MB/s)(21.7MiB/1003msec) 00:09:22.003 slat (usec): min=2, max=5651, avg=88.23, stdev=424.22 00:09:22.003 clat (usec): min=286, max=21337, avg=11495.73, stdev=3586.46 00:09:22.003 lat (usec): min=2640, max=21340, avg=11583.96, stdev=3597.54 00:09:22.003 clat percentiles (usec): 00:09:22.003 | 1.00th=[ 4686], 5.00th=[ 6259], 10.00th=[ 6521], 20.00th=[ 7177], 00:09:22.003 | 30.00th=[ 9241], 40.00th=[10683], 50.00th=[11863], 60.00th=[12911], 00:09:22.003 | 70.00th=[14091], 80.00th=[14877], 90.00th=[16057], 95.00th=[16909], 00:09:22.003 | 99.00th=[18482], 99.50th=[18744], 99.90th=[21365], 99.95th=[21365], 00:09:22.003 
| 99.99th=[21365] 00:09:22.003 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:09:22.003 slat (usec): min=2, max=5602, avg=86.07, stdev=398.59 00:09:22.003 clat (usec): min=2906, max=22591, avg=11145.18, stdev=3958.52 00:09:22.003 lat (usec): min=3269, max=23635, avg=11231.25, stdev=3978.69 00:09:22.003 clat percentiles (usec): 00:09:22.003 | 1.00th=[ 4883], 5.00th=[ 6063], 10.00th=[ 6390], 20.00th=[ 6915], 00:09:22.003 | 30.00th=[ 8094], 40.00th=[ 9372], 50.00th=[11076], 60.00th=[12387], 00:09:22.003 | 70.00th=[13566], 80.00th=[14615], 90.00th=[16581], 95.00th=[17957], 00:09:22.003 | 99.00th=[20055], 99.50th=[20841], 99.90th=[22676], 99.95th=[22676], 00:09:22.003 | 99.99th=[22676] 00:09:22.003 bw ( KiB/s): min=20368, max=24688, per=23.23%, avg=22528.00, stdev=3054.70, samples=2 00:09:22.003 iops : min= 5092, max= 6172, avg=5632.00, stdev=763.68, samples=2 00:09:22.003 lat (usec) : 500=0.01% 00:09:22.003 lat (msec) : 4=0.42%, 10=38.46%, 20=60.43%, 50=0.68% 00:09:22.003 cpu : usr=3.29%, sys=5.39%, ctx=1177, majf=0, minf=1 00:09:22.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:22.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.003 issued rwts: total=5553,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.003 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.003 00:09:22.003 Run status group 0 (all jobs): 00:09:22.003 READ: bw=91.0MiB/s (95.4MB/s), 19.5MiB/s-25.9MiB/s (20.5MB/s-27.2MB/s), io=91.3MiB (95.7MB), run=1001-1003msec 00:09:22.003 WRITE: bw=94.7MiB/s (99.3MB/s), 19.9MiB/s-27.1MiB/s (20.9MB/s-28.4MB/s), io=95.0MiB (99.6MB), run=1001-1003msec 00:09:22.003 00:09:22.003 Disk stats (read/write): 00:09:22.003 nvme0n1: ios=4269/4608, merge=0/0, ticks=15008/15378, in_queue=30386, util=84.97% 00:09:22.003 nvme0n2: ios=5632/5683, merge=0/0, ticks=15660/14907, 
in_queue=30567, util=85.86% 00:09:22.003 nvme0n3: ios=5120/5329, merge=0/0, ticks=14662/15575, in_queue=30237, util=88.40% 00:09:22.003 nvme0n4: ios=4765/5120, merge=0/0, ticks=15020/15483, in_queue=30503, util=89.15% 00:09:22.003 11:32:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:22.003 11:32:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1553164 00:09:22.003 11:32:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:22.003 11:32:25 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:22.003 [global] 00:09:22.003 thread=1 00:09:22.003 invalidate=1 00:09:22.003 rw=read 00:09:22.003 time_based=1 00:09:22.003 runtime=10 00:09:22.003 ioengine=libaio 00:09:22.003 direct=1 00:09:22.003 bs=4096 00:09:22.003 iodepth=1 00:09:22.003 norandommap=1 00:09:22.003 numjobs=1 00:09:22.003 00:09:22.003 [job0] 00:09:22.003 filename=/dev/nvme0n1 00:09:22.003 [job1] 00:09:22.003 filename=/dev/nvme0n2 00:09:22.003 [job2] 00:09:22.003 filename=/dev/nvme0n3 00:09:22.003 [job3] 00:09:22.003 filename=/dev/nvme0n4 00:09:22.003 Could not set queue depth (nvme0n1) 00:09:22.003 Could not set queue depth (nvme0n2) 00:09:22.003 Could not set queue depth (nvme0n3) 00:09:22.003 Could not set queue depth (nvme0n4) 00:09:22.262 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.262 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.262 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.262 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:22.262 fio-3.35 00:09:22.262 Starting 4 threads 00:09:24.792 11:32:28 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:25.050 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=103690240, buflen=4096 00:09:25.050 fio: pid=1553280, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:25.050 11:32:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:25.308 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=76951552, buflen=4096 00:09:25.308 fio: pid=1553279, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:25.308 11:32:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:25.308 11:32:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:25.566 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=20881408, buflen=4096 00:09:25.566 fio: pid=1553277, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:25.566 11:32:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:25.566 11:32:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:25.828 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=32755712, buflen=4096 00:09:25.829 fio: pid=1553278, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:25.829 11:32:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:09:25.829 11:32:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:25.829 00:09:25.829 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1553277: Wed Nov 20 11:32:29 2024 00:09:25.829 read: IOPS=6883, BW=26.9MiB/s (28.2MB/s)(83.9MiB/3121msec) 00:09:25.829 slat (usec): min=8, max=31181, avg=13.31, stdev=323.43 00:09:25.829 clat (usec): min=50, max=479, avg=130.12, stdev=36.77 00:09:25.829 lat (usec): min=59, max=31284, avg=143.43, stdev=325.20 00:09:25.829 clat percentiles (usec): 00:09:25.829 | 1.00th=[ 60], 5.00th=[ 78], 10.00th=[ 81], 20.00th=[ 87], 00:09:25.829 | 30.00th=[ 96], 40.00th=[ 130], 50.00th=[ 141], 60.00th=[ 147], 00:09:25.829 | 70.00th=[ 153], 80.00th=[ 163], 90.00th=[ 174], 95.00th=[ 180], 00:09:25.829 | 99.00th=[ 208], 99.50th=[ 221], 99.90th=[ 233], 99.95th=[ 237], 00:09:25.829 | 99.99th=[ 249] 00:09:25.829 bw ( KiB/s): min=23040, max=32026, per=25.55%, avg=27447.00, stdev=3394.60, samples=6 00:09:25.829 iops : min= 5760, max= 8006, avg=6861.67, stdev=848.52, samples=6 00:09:25.829 lat (usec) : 100=32.06%, 250=67.92%, 500=0.01% 00:09:25.829 cpu : usr=2.53%, sys=7.60%, ctx=21487, majf=0, minf=1 00:09:25.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.829 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.829 issued rwts: total=21483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.829 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1553278: Wed Nov 20 11:32:29 2024 00:09:25.829 read: IOPS=7278, BW=28.4MiB/s (29.8MB/s)(95.2MiB/3350msec) 00:09:25.829 slat (usec): 
min=8, max=15946, avg=12.66, stdev=223.85 00:09:25.829 clat (usec): min=38, max=492, avg=122.94, stdev=42.20 00:09:25.829 lat (usec): min=58, max=16018, avg=135.60, stdev=227.33 00:09:25.829 clat percentiles (usec): 00:09:25.829 | 1.00th=[ 54], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 76], 00:09:25.829 | 30.00th=[ 87], 40.00th=[ 117], 50.00th=[ 139], 60.00th=[ 145], 00:09:25.829 | 70.00th=[ 151], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 180], 00:09:25.830 | 99.00th=[ 202], 99.50th=[ 217], 99.90th=[ 235], 99.95th=[ 241], 00:09:25.830 | 99.99th=[ 371] 00:09:25.830 bw ( KiB/s): min=22888, max=35777, per=25.44%, avg=27325.50, stdev=4809.92, samples=6 00:09:25.830 iops : min= 5722, max= 8944, avg=6831.33, stdev=1202.39, samples=6 00:09:25.830 lat (usec) : 50=0.03%, 100=35.20%, 250=64.74%, 500=0.02% 00:09:25.830 cpu : usr=2.72%, sys=7.94%, ctx=24388, majf=0, minf=2 00:09:25.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.830 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.830 issued rwts: total=24382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.830 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.830 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1553279: Wed Nov 20 11:32:29 2024 00:09:25.830 read: IOPS=6403, BW=25.0MiB/s (26.2MB/s)(73.4MiB/2934msec) 00:09:25.830 slat (usec): min=8, max=15872, avg=11.18, stdev=163.34 00:09:25.830 clat (usec): min=67, max=497, avg=142.14, stdev=30.64 00:09:25.830 lat (usec): min=76, max=16002, avg=153.32, stdev=166.05 00:09:25.830 clat percentiles (usec): 00:09:25.830 | 1.00th=[ 81], 5.00th=[ 90], 10.00th=[ 96], 20.00th=[ 109], 00:09:25.830 | 30.00th=[ 135], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:09:25.830 | 70.00th=[ 157], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 188], 00:09:25.830 | 99.00th=[ 
221], 99.50th=[ 227], 99.90th=[ 239], 99.95th=[ 241], 00:09:25.830 | 99.99th=[ 375] 00:09:25.830 bw ( KiB/s): min=22800, max=29568, per=23.47%, avg=25209.60, stdev=2644.16, samples=5 00:09:25.830 iops : min= 5700, max= 7392, avg=6302.40, stdev=661.04, samples=5 00:09:25.830 lat (usec) : 100=14.12%, 250=85.84%, 500=0.03% 00:09:25.830 cpu : usr=2.59%, sys=7.06%, ctx=18791, majf=0, minf=2 00:09:25.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.830 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.831 issued rwts: total=18788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.831 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1553280: Wed Nov 20 11:32:29 2024 00:09:25.831 read: IOPS=9303, BW=36.3MiB/s (38.1MB/s)(98.9MiB/2721msec) 00:09:25.831 slat (nsec): min=8263, max=44778, avg=9210.96, stdev=1143.27 00:09:25.831 clat (usec): min=62, max=322, avg=95.65, stdev=10.01 00:09:25.831 lat (usec): min=75, max=331, avg=104.86, stdev=10.10 00:09:25.831 clat percentiles (usec): 00:09:25.831 | 1.00th=[ 82], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 89], 00:09:25.831 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 95], 60.00th=[ 97], 00:09:25.831 | 70.00th=[ 99], 80.00th=[ 101], 90.00th=[ 105], 95.00th=[ 109], 00:09:25.831 | 99.00th=[ 137], 99.50th=[ 153], 99.90th=[ 196], 99.95th=[ 200], 00:09:25.831 | 99.99th=[ 269] 00:09:25.831 bw ( KiB/s): min=37120, max=38448, per=35.16%, avg=37772.80, stdev=603.50, samples=5 00:09:25.831 iops : min= 9280, max= 9612, avg=9443.20, stdev=150.87, samples=5 00:09:25.831 lat (usec) : 100=76.24%, 250=23.74%, 500=0.01% 00:09:25.831 cpu : usr=3.79%, sys=10.00%, ctx=25316, majf=0, minf=2 00:09:25.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:09:25.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.832 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.832 issued rwts: total=25316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.832 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.832 00:09:25.832 Run status group 0 (all jobs): 00:09:25.832 READ: bw=105MiB/s (110MB/s), 25.0MiB/s-36.3MiB/s (26.2MB/s-38.1MB/s), io=351MiB (368MB), run=2721-3350msec 00:09:25.832 00:09:25.832 Disk stats (read/write): 00:09:25.832 nvme0n1: ios=21337/0, merge=0/0, ticks=2634/0, in_queue=2634, util=92.88% 00:09:25.832 nvme0n2: ios=24381/0, merge=0/0, ticks=2860/0, in_queue=2860, util=93.99% 00:09:25.832 nvme0n3: ios=18306/0, merge=0/0, ticks=2518/0, in_queue=2518, util=95.46% 00:09:25.832 nvme0n4: ios=24430/0, merge=0/0, ticks=2185/0, in_queue=2185, util=96.44% 00:09:26.090 11:32:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:26.090 11:32:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:26.349 11:32:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:26.349 11:32:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:26.349 11:32:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:26.349 11:32:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:26.607 11:32:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:26.607 11:32:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:26.864 11:32:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:26.864 11:32:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1553164 00:09:26.864 11:32:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:26.864 11:32:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:27.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.795 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:27.795 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:27.795 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:27.795 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:27.795 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:27.795 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:27.795 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:27.795 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:27.795 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:27.795 nvmf hotplug test: fio failed as expected 00:09:27.795 
11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:09:28.052 rmmod nvme_rdma 00:09:28.052 rmmod nvme_fabrics 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- 
# '[' -n 1550805 ']' 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 1550805 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1550805 ']' 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1550805 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1550805 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1550805' 00:09:28.052 killing process with pid 1550805 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1550805 00:09:28.052 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1550805 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@264 -- # local dev 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:28.619 11:32:31 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@130 -- # return 0 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:09:28.619 11:32:31 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@284 -- # iptr 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@542 -- # iptables-save 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:09:28.619 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@542 -- # iptables-restore 00:09:28.619 00:09:28.619 real 0m26.401s 00:09:28.619 user 1m37.355s 00:09:28.619 sys 0m10.333s 00:09:28.620 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.620 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:28.620 ************************************ 00:09:28.620 END TEST nvmf_fio_target 00:09:28.620 ************************************ 00:09:28.620 11:32:31 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:09:28.620 11:32:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:28.620 11:32:31 nvmf_rdma.nvmf_target_core -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.620 11:32:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:28.620 ************************************ 00:09:28.620 START TEST nvmf_bdevio 00:09:28.620 ************************************ 00:09:28.620 11:32:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:09:28.620 * Looking for test storage... 00:09:28.620 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 
-- # ver1_l=2 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@368 -- # return 0 00:09:28.620 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:28.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.878 --rc genhtml_branch_coverage=1 00:09:28.878 --rc genhtml_function_coverage=1 00:09:28.878 --rc genhtml_legend=1 00:09:28.878 --rc geninfo_all_blocks=1 00:09:28.878 --rc geninfo_unexecuted_blocks=1 00:09:28.878 00:09:28.878 ' 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:28.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.878 --rc genhtml_branch_coverage=1 00:09:28.878 --rc genhtml_function_coverage=1 00:09:28.878 --rc genhtml_legend=1 00:09:28.878 --rc geninfo_all_blocks=1 00:09:28.878 --rc geninfo_unexecuted_blocks=1 00:09:28.878 00:09:28.878 ' 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:28.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.878 --rc genhtml_branch_coverage=1 00:09:28.878 --rc genhtml_function_coverage=1 00:09:28.878 --rc genhtml_legend=1 00:09:28.878 --rc geninfo_all_blocks=1 00:09:28.878 --rc geninfo_unexecuted_blocks=1 00:09:28.878 00:09:28.878 ' 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:28.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.878 --rc genhtml_branch_coverage=1 00:09:28.878 --rc genhtml_function_coverage=1 00:09:28.878 --rc genhtml_legend=1 00:09:28.878 --rc geninfo_all_blocks=1 00:09:28.878 --rc geninfo_unexecuted_blocks=1 00:09:28.878 00:09:28.878 ' 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 
00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:09:28.878 
11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.878 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:28.879 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:28.879 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:28.879 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:28.879 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:28.879 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:28.879 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:28.879 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:28.879 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:09:28.879 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.879 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:28.879 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:28.879 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:09:28.879 11:32:32 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:28.879 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:28.879 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:28.879 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:09:28.879 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:09:28.879 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # xtrace_disable 00:09:28.879 11:32:32 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@131 -- # pci_devs=() 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@131 -- # local -a pci_devs 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@132 -- # pci_net_devs=() 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@133 -- # pci_drivers=() 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@133 -- # local -A pci_drivers 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@135 -- # net_devs=() 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@135 -- # local -ga net_devs 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@136 -- # e810=() 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@136 -- # local -ga e810 00:09:35.436 11:32:38 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@137 -- # x722=() 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@137 -- # local -ga x722 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@138 -- # mlx=() 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@138 -- # local -ga mlx 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.436 11:32:38 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:35.436 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:35.436 Found 
0000:18:00.1 (0x15b3 - 0x1015) 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:35.436 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:35.437 Found net devices under 0000:18:00.0: mlx_0_0 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:35.437 11:32:38 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:35.437 Found net devices under 0000:18:00.1: mlx_0_1 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # get_rdma_if_list 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@75 -- # rdma_devs=() 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@89 -- # continue 2 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@89 -- # continue 2 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # is_hw=yes 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:09:35.437 11:32:38 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@61 -- # uname 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_cm 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_core 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_umad 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@69 -- # modprobe iw_cm 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:35.437 11:32:38 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # key_initiator=target1 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:35.437 
11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:09:35.437 10.0.0.1 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:09:35.437 11:32:38 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:09:35.437 10.0.0.2 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:09:35.437 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@85 -- # 
dev_map["$key_initiator"]=mlx_0_0 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:09:35.438 11:32:38 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:09:35.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:35.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.030 ms 00:09:35.438 00:09:35.438 --- 10.0.0.2 ping statistics --- 00:09:35.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.438 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # 
echo 10.0.0.2 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:09:35.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:35.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.026 ms 00:09:35.438 00:09:35.438 --- 10.0.0.2 ping statistics --- 00:09:35.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.438 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair++ )) 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # return 0 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:09:35.438 11:32:38 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:09:35.438 11:32:38 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:09:35.438 11:32:38 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:09:35.438 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:09:35.439 11:32:38 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target1 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target1 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:09:35.439 11:32:38 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=1556967 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 1556967 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1556967 ']' 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:35.439 [2024-11-20 11:32:38.509305] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
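Earlier in this excerpt, the address assignment path calls `val_to_ip` to unpack a 32-bit integer (167772161) into dotted-quad form (10.0.0.1) before running `ip addr add`. A standalone sketch of that conversion, reduced from the trace above (octet extraction shown explicitly; the in-tree helper may differ in detail):

```shell
# Sketch of the val_to_ip helper seen in the trace: split a 32-bit
# value into four octets and print them dotted (167772161 -> 10.0.0.1).
val_to_ip() {
    local val=$(( $1 ))
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}
val_to_ip 167772161   # -> 10.0.0.1
val_to_ip 167772162   # -> 10.0.0.2
```

This is why the setup code can hand out consecutive addresses by simply incrementing an integer pool (`ip_pool += 2` in the trace).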
00:09:35.439 [2024-11-20 11:32:38.509362] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.439 [2024-11-20 11:32:38.588726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.439 [2024-11-20 11:32:38.637486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.439 [2024-11-20 11:32:38.637528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.439 [2024-11-20 11:32:38.637538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.439 [2024-11-20 11:32:38.637546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.439 [2024-11-20 11:32:38.637554] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
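The `-m 0x78` mask passed to `nvmf_tgt` (and echoed as `-c 0x78` in the EAL parameters above) is a CPU core bitmask: 0x78 is binary 1111000, i.e. cores 3 through 6, which matches the four "Reactor started on core N" notices that follow. A small, hypothetical decoder (not part of the test scripts) makes the mapping explicit:

```shell
# Decode a reactor/EAL core mask into the CPU cores it enables.
# 0x78 == binary 1111000 -> cores 3 4 5 6, matching the four
# reactor_run notices in this log.
mask_to_cores() {
    local mask=$(( $1 )) core=0 out=""
    while [ "$mask" -ne 0 ]; do
        [ $(( mask & 1 )) -ne 0 ] && out="$out $core"
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${out# }"   # trim the leading space
}
mask_to_cores 0x78   # -> 3 4 5 6
```

The same decoding explains the bdevio app later in the log: it is launched with `-c 0x7` and its reactors start on cores 0, 1 and 2.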
00:09:35.439 [2024-11-20 11:32:38.638940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:35.439 [2024-11-20 11:32:38.639091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:35.439 [2024-11-20 11:32:38.639140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.439 [2024-11-20 11:32:38.639140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.439 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:35.439 [2024-11-20 11:32:38.819672] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21a5b20/0x21aa010) succeed. 00:09:35.439 [2024-11-20 11:32:38.828808] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21a71b0/0x21eb6b0) succeed. 
00:09:35.697 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.697 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:35.697 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.697 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:35.697 Malloc0 00:09:35.697 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.697 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:35.697 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.697 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:35.697 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.697 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:35.697 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.697 11:32:38 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:35.697 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.697 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:09:35.697 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.697 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:35.697 
[2024-11-20 11:32:39.014061] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:09:35.697 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.697 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:35.697 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:35.697 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:09:35.697 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:09:35.697 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:09:35.697 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:09:35.697 { 00:09:35.697 "params": { 00:09:35.697 "name": "Nvme$subsystem", 00:09:35.697 "trtype": "$TEST_TRANSPORT", 00:09:35.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:35.697 "adrfam": "ipv4", 00:09:35.697 "trsvcid": "$NVMF_PORT", 00:09:35.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:35.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:35.697 "hdgst": ${hdgst:-false}, 00:09:35.697 "ddgst": ${ddgst:-false} 00:09:35.697 }, 00:09:35.697 "method": "bdev_nvme_attach_controller" 00:09:35.697 } 00:09:35.697 EOF 00:09:35.697 )") 00:09:35.697 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:09:35.697 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
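The `gen_nvmf_target_json` call above builds the JSON fed to bdevio on `/dev/fd/62` by expanding a per-subsystem heredoc template and joining the entries with `IFS=,`. A reduced sketch of that pattern, using the values this run resolved (10.0.0.2, port 4420); the real helper loops over subsystems and pipes through `jq`, which is omitted here:

```shell
# Minimal re-creation of the heredoc-template pattern used by
# gen_nvmf_target_json; one subsystem, values taken from this run.
TEST_TRANSPORT=rdma
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1
entry=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$entry"
```

Because the template is expanded at `cat` time, `${hdgst:-false}` and friends pick up any digest settings exported by the caller, defaulting to `false` — which is exactly what the rendered config printed by `printf '%s\n'` in the log shows.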
00:09:35.697 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:09:35.697 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:09:35.697 "params": { 00:09:35.697 "name": "Nvme1", 00:09:35.697 "trtype": "rdma", 00:09:35.697 "traddr": "10.0.0.2", 00:09:35.697 "adrfam": "ipv4", 00:09:35.697 "trsvcid": "4420", 00:09:35.697 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:35.697 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:35.697 "hdgst": false, 00:09:35.697 "ddgst": false 00:09:35.697 }, 00:09:35.697 "method": "bdev_nvme_attach_controller" 00:09:35.697 }' 00:09:35.697 [2024-11-20 11:32:39.067318] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:09:35.697 [2024-11-20 11:32:39.067374] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557158 ] 00:09:35.697 [2024-11-20 11:32:39.145967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:35.953 [2024-11-20 11:32:39.195305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.953 [2024-11-20 11:32:39.195390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.953 [2024-11-20 11:32:39.195393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.953 I/O targets: 00:09:35.953 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:35.953 00:09:35.953 00:09:35.953 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.953 http://cunit.sourceforge.net/ 00:09:35.953 00:09:35.953 00:09:35.953 Suite: bdevio tests on: Nvme1n1 00:09:35.953 Test: blockdev write read block ...passed 00:09:35.953 Test: blockdev write zeroes read block ...passed 00:09:35.953 Test: blockdev write zeroes read no split ...passed 00:09:35.953 Test: blockdev write zeroes read split 
...passed 00:09:35.953 Test: blockdev write zeroes read split partial ...passed 00:09:35.953 Test: blockdev reset ...[2024-11-20 11:32:39.409296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:36.211 [2024-11-20 11:32:39.431958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:09:36.211 [2024-11-20 11:32:39.458821] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:36.211 passed 00:09:36.211 Test: blockdev write read 8 blocks ...passed 00:09:36.211 Test: blockdev write read size > 128k ...passed 00:09:36.211 Test: blockdev write read invalid size ...passed 00:09:36.211 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:36.211 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:36.211 Test: blockdev write read max offset ...passed 00:09:36.211 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:36.211 Test: blockdev writev readv 8 blocks ...passed 00:09:36.211 Test: blockdev writev readv 30 x 1block ...passed 00:09:36.211 Test: blockdev writev readv block ...passed 00:09:36.211 Test: blockdev writev readv size > 128k ...passed 00:09:36.211 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:36.211 Test: blockdev comparev and writev ...[2024-11-20 11:32:39.461784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:36.211 [2024-11-20 11:32:39.461813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:36.211 [2024-11-20 11:32:39.461827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:09:36.211 [2024-11-20 11:32:39.461836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:36.211 [2024-11-20 11:32:39.462017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:36.211 [2024-11-20 11:32:39.462028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:36.211 [2024-11-20 11:32:39.462044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:36.211 [2024-11-20 11:32:39.462053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:36.211 [2024-11-20 11:32:39.462221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:36.211 [2024-11-20 11:32:39.462231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:36.211 [2024-11-20 11:32:39.462242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:36.211 [2024-11-20 11:32:39.462251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:36.211 [2024-11-20 11:32:39.462428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:36.211 [2024-11-20 11:32:39.462438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:36.211 [2024-11-20 11:32:39.462448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:36.211 [2024-11-20 11:32:39.462458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:36.211 passed 00:09:36.211 Test: blockdev nvme passthru rw ...passed 00:09:36.211 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:32:39.462725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:09:36.211 [2024-11-20 11:32:39.462741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:36.211 [2024-11-20 11:32:39.462786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:09:36.211 [2024-11-20 11:32:39.462796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:36.211 [2024-11-20 11:32:39.462843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:09:36.211 [2024-11-20 11:32:39.462853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:36.211 [2024-11-20 11:32:39.462894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:09:36.211 [2024-11-20 11:32:39.462904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:36.211 passed 00:09:36.211 Test: blockdev nvme admin passthru ...passed 00:09:36.211 Test: blockdev copy ...passed 00:09:36.211 00:09:36.211 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.211 suites 1 1 n/a 0 0 00:09:36.211 tests 23 23 23 0 0 
00:09:36.211 asserts 152 152 152 0 n/a 00:09:36.211 00:09:36.211 Elapsed time = 0.172 seconds 00:09:36.211 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.211 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.211 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:36.211 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.211 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:36.211 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:36.211 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:09:36.211 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:09:36.211 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:09:36.211 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:09:36.211 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:09:36.211 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:09:36.211 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:09:36.211 rmmod nvme_rdma 00:09:36.211 rmmod nvme_fabrics 00:09:36.469 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:09:36.469 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:09:36.469 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:09:36.469 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # '[' -n 1556967 ']' 00:09:36.469 
11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 1556967 00:09:36.469 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1556967 ']' 00:09:36.469 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1556967 00:09:36.469 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:36.469 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.469 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1556967 00:09:36.469 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:36.469 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:36.469 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1556967' 00:09:36.469 killing process with pid 1556967 00:09:36.469 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1556967 00:09:36.469 11:32:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1556967 00:09:36.728 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:36.728 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:09:36.728 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@264 -- # local dev 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@267 -- # remove_target_ns 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> 
/dev/null' 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@268 -- # delete_main_bridge 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@130 -- # return 0 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@222 -- # [[ -n '' 
]] 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@284 -- # iptr 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@542 -- # iptables-save 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@542 -- # iptables-restore 00:09:36.729 00:09:36.729 real 0m8.191s 00:09:36.729 user 0m8.527s 00:09:36.729 sys 0m5.451s 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:36.729 ************************************ 00:09:36.729 END TEST nvmf_bdevio 00:09:36.729 ************************************ 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # [[ rdma == \t\c\p ]] 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:09:36.729 00:09:36.729 real 3m50.116s 00:09:36.729 user 10m13.894s 00:09:36.729 sys 1m21.077s 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.729 11:32:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.729 ************************************ 00:09:36.729 END TEST nvmf_target_core 
00:09:36.729 ************************************ 00:09:36.729 11:32:40 nvmf_rdma -- nvmf/nvmf.sh@11 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:09:36.729 11:32:40 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:36.729 11:32:40 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.729 11:32:40 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:36.988 ************************************ 00:09:36.988 START TEST nvmf_target_extra 00:09:36.988 ************************************ 00:09:36.988 11:32:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:09:36.988 * Looking for test storage... 00:09:36.988 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:09:36.988 11:32:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.988 11:32:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.988 11:32:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.988 11:32:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.988 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.988 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.988 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.988 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.988 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.988 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # 
read -ra ver2 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 
00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.989 --rc genhtml_branch_coverage=1 00:09:36.989 --rc genhtml_function_coverage=1 00:09:36.989 --rc genhtml_legend=1 00:09:36.989 --rc geninfo_all_blocks=1 00:09:36.989 --rc geninfo_unexecuted_blocks=1 00:09:36.989 00:09:36.989 ' 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.989 --rc genhtml_branch_coverage=1 00:09:36.989 --rc genhtml_function_coverage=1 00:09:36.989 --rc genhtml_legend=1 00:09:36.989 --rc geninfo_all_blocks=1 00:09:36.989 --rc geninfo_unexecuted_blocks=1 00:09:36.989 00:09:36.989 ' 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.989 --rc genhtml_branch_coverage=1 00:09:36.989 --rc genhtml_function_coverage=1 00:09:36.989 --rc genhtml_legend=1 00:09:36.989 --rc geninfo_all_blocks=1 00:09:36.989 --rc geninfo_unexecuted_blocks=1 00:09:36.989 00:09:36.989 ' 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.989 --rc genhtml_branch_coverage=1 00:09:36.989 --rc genhtml_function_coverage=1 00:09:36.989 --rc genhtml_legend=1 00:09:36.989 --rc geninfo_all_blocks=1 00:09:36.989 --rc geninfo_unexecuted_blocks=1 00:09:36.989 00:09:36.989 ' 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.989 11:32:40 
nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@50 -- # : 0 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:36.989 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@54 -- # have_pci_nics=0 
00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:09:36.989 11:32:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:36.990 11:32:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.990 11:32:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:37.249 ************************************ 00:09:37.249 START TEST nvmf_example 00:09:37.249 ************************************ 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:09:37.249 * Looking for test storage... 
00:09:37.249 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:37.249 11:32:40 
nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:37.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.249 --rc 
genhtml_branch_coverage=1 00:09:37.249 --rc genhtml_function_coverage=1 00:09:37.249 --rc genhtml_legend=1 00:09:37.249 --rc geninfo_all_blocks=1 00:09:37.249 --rc geninfo_unexecuted_blocks=1 00:09:37.249 00:09:37.249 ' 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:37.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.249 --rc genhtml_branch_coverage=1 00:09:37.249 --rc genhtml_function_coverage=1 00:09:37.249 --rc genhtml_legend=1 00:09:37.249 --rc geninfo_all_blocks=1 00:09:37.249 --rc geninfo_unexecuted_blocks=1 00:09:37.249 00:09:37.249 ' 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:37.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.249 --rc genhtml_branch_coverage=1 00:09:37.249 --rc genhtml_function_coverage=1 00:09:37.249 --rc genhtml_legend=1 00:09:37.249 --rc geninfo_all_blocks=1 00:09:37.249 --rc geninfo_unexecuted_blocks=1 00:09:37.249 00:09:37.249 ' 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:37.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.249 --rc genhtml_branch_coverage=1 00:09:37.249 --rc genhtml_function_coverage=1 00:09:37.249 --rc genhtml_legend=1 00:09:37.249 --rc geninfo_all_blocks=1 00:09:37.249 --rc geninfo_unexecuted_blocks=1 00:09:37.249 00:09:37.249 ' 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.249 11:32:40 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.249 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@50 -- # : 0 
00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:37.250 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 
00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # remove_target_ns 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # xtrace_disable 00:09:37.250 11:32:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:43.816 11:32:46 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.816 
11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@131 -- # pci_devs=() 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@131 -- # local -a pci_devs 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@132 -- # pci_net_devs=() 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@133 -- # pci_drivers=() 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@133 -- # local -A pci_drivers 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@135 -- # net_devs=() 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@135 -- # local -ga net_devs 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@136 -- # e810=() 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@136 -- # local -ga e810 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@137 -- # x722=() 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@137 -- # local -ga x722 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@138 -- # mlx=() 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@138 -- # local -ga mlx 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.816 
11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.816 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:43.817 11:32:47 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:43.817 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:43.817 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:09:43.817 11:32:47 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:43.817 Found net devices under 0000:18:00.0: mlx_0_0 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:43.817 Found net devices under 0000:18:00.1: mlx_0_1 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@248 -- # (( 2 == 0 )) 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # get_rdma_if_list 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@75 -- # rdma_devs=() 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@89 -- # continue 2 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@89 -- # continue 2 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # is_hw=yes 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@61 -- # uname 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_cm 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_core 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_umad 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # 
modprobe ib_uverbs 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe iw_cm 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@28 -- # local -g _dev 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # ips=() 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@58 -- # key_initiator=target1 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772161 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:43.817 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:09:43.818 10.0.0.1 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772162 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:09:43.818 10.0.0.2 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:09:43.818 
11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@38 -- # ping_ips 1 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:09:43.818 11:32:47 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=target0 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:43.818 11:32:47 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:09:43.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.028 ms 00:09:43.818 00:09:43.818 --- 10.0.0.2 ping statistics --- 00:09:43.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.818 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=target0 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:09:43.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:43.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.022 ms 00:09:43.818 00:09:43.818 --- 10.0.0.2 ping statistics --- 00:09:43.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.818 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair++ )) 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # return 0 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:09:43.818 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=target0 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev target1 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=target1 00:09:43.819 11:32:47 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=target0 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:43.819 11:32:47 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev target1 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=target1 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:09:43.819 11:32:47 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:09:43.819 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:09:44.078 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:44.078 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:44.078 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:44.078 11:32:47 
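The trace above resolves each target/initiator address by reading the `ifalias` sysfs attribute of the mapped netdev (`cat /sys/class/net/<dev>/ifalias`). A minimal sketch of that lookup, with `NET_SYSFS` as a hypothetical override added here so the helper can be exercised without real RDMA interfaces:

```shell
# Sketch of the ifalias-based address lookup seen in setup.sh@165-175.
# NET_SYSFS is an assumption (defaults to /sys/class/net) so the helper
# can be tested against a fake sysfs tree.
get_ifalias_ip() {
    local dev=$1 root=${NET_SYSFS:-/sys/class/net} ip
    # Read the alias the setup script wrote for this device, if any.
    ip=$(cat "$root/$dev/ifalias" 2>/dev/null) || return 1
    [[ -n $ip ]] || return 1
    echo "$ip"
}
```

In this run the aliases hold 10.0.0.1 (mlx_0_0) and 10.0.0.2 (mlx_0_1), which is why the trace echoes those addresses.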
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.078 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:09:44.078 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1560315 00:09:44.078 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:44.078 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:44.078 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1560315 00:09:44.078 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1560315 ']' 00:09:44.078 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.078 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.078 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
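`waitforlisten` above blocks until the freshly started `nvmf` app (pid 1560315) is listening on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A hedged sketch of that polling pattern (the real helper also checks the pid is alive; this sketch only polls for the UNIX socket):

```shell
# Sketch of the waitforlisten pattern: poll for the RPC socket with a
# retry cap. Socket path and retry count mirror this run's defaults.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    local i
    for ((i = 0; i < retries; i++)); do
        # -S: the path exists and is a UNIX domain socket.
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    return 1
}
```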
00:09:44.078 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.078 11:32:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:45.011 11:32:48 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:57.271 Initializing NVMe Controllers 00:09:57.271 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:57.271 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:57.271 Initialization complete. Launching workers. 00:09:57.271 ======================================================== 00:09:57.271 Latency(us) 00:09:57.272 Device Information : IOPS MiB/s Average min max 00:09:57.272 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 23841.09 93.13 2683.96 651.17 14044.62 00:09:57.272 ======================================================== 00:09:57.272 Total : 23841.09 93.13 2683.96 651.17 14044.62 00:09:57.272 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # nvmfcleanup 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@99 -- # sync 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # set +e 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # for i in {1..20} 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:09:57.272 rmmod nvme_rdma 00:09:57.272 rmmod nvme_fabrics 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # 
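The `rpc_cmd` calls above build the target in five steps: create the RDMA transport, create a malloc bdev, create the subsystem, attach the bdev as a namespace, and add an RDMA listener; `spdk_nvme_perf` then connects to it. A dry-run sketch that only prints that sequence (`RPC_PY` and its path are assumptions; the log uses the `rpc_cmd` wrapper instead):

```shell
# Dry-run sketch of the RPC sequence from nvmf_example.sh@45-57.
# Echoes the commands rather than executing them, so it needs no
# running SPDK target. RPC_PY is an assumed client path.
RPC_PY=${RPC_PY:-scripts/rpc.py}
setup_rdma_subsystem() {
    echo "$RPC_PY nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192"
    echo "$RPC_PY bdev_malloc_create 64 512"
    echo "$RPC_PY nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
    echo "$RPC_PY nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
    echo "$RPC_PY nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420"
}
```

The perf results that follow (≈23.8k IOPS at ~2.68 ms average latency for 4 KiB randrw at QD 64) are against exactly this listener.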
set -e 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # return 0 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # '[' -n 1560315 ']' 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@337 -- # killprocess 1560315 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1560315 ']' 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1560315 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1560315 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1560315' 00:09:57.272 killing process with pid 1560315 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1560315 00:09:57.272 11:32:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1560315 00:09:57.272 nvmf threads initialize successfully 00:09:57.272 bdev subsystem init successfully 00:09:57.272 created a nvmf target service 00:09:57.272 create targets's poll groups done 00:09:57.272 all subsystems of target started 00:09:57.272 nvmf target is running 00:09:57.272 all subsystems of target stopped 00:09:57.272 destroy targets's poll groups done 00:09:57.272 destroyed the nvmf target service 
00:09:57.272 bdev subsystem finish successfully 00:09:57.272 nvmf threads destroy successfully 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # nvmf_fini 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@264 -- # local dev 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@267 -- # remove_target_ns 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@268 -- # delete_main_bridge 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@130 -- # return 0 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # eval ' ip 
addr flush dev mlx_0_1' 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # _dev=0 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # dev_map=() 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@284 -- # iptr 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@542 -- # iptables-save 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@542 -- # iptables-restore 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # 
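Teardown above iterates `dev_map` and flushes addresses from each device, then restores iptables minus the SPDK_NVMF rules. A sketch of the per-device flush; `DRY_RUN` is an assumption added here so the sketch runs without privileges:

```shell
# Sketch of setup.sh@221-224 ("flush_ip"): remove all addresses from a
# device. DRY_RUN (an assumption) prints the command instead of running it.
flush_dev() {
    local dev=$1
    if [[ -n ${DRY_RUN:-} ]]; then
        echo "ip addr flush dev $dev"
    else
        ip addr flush dev "$dev"
    fi
}
```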
xtrace_disable 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.272 00:09:57.272 real 0m19.708s 00:09:57.272 user 0m52.495s 00:09:57.272 sys 0m5.575s 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.272 ************************************ 00:09:57.272 END TEST nvmf_example 00:09:57.272 ************************************ 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:57.272 ************************************ 00:09:57.272 START TEST nvmf_filesystem 00:09:57.272 ************************************ 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:09:57.272 * Looking for test storage... 
00:09:57.272 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 
00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:57.272 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:57.273 --rc 
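The `cmp_versions 1.15 '<' 2` trace above splits both versions on `.`/`-`, compares them component-wise, and stops at the first difference (here 1 < 2, so lcov 1.15 is treated as older than 2 and the legacy `lcov_rc_opt` flags are used). A compact pure-bash restatement of that component-wise "less than" check:

```shell
# Restatement of the scripts/common.sh cmp_versions walk: split on '.',
# compare numerically component by component, first difference decides.
# Missing components are treated as 0 (so 1.15 vs 2 compares 1 vs 2).
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0
        ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
    done
    return 1 # equal, so not strictly less
}
```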
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.273 --rc genhtml_branch_coverage=1 00:09:57.273 --rc genhtml_function_coverage=1 00:09:57.273 --rc genhtml_legend=1 00:09:57.273 --rc geninfo_all_blocks=1 00:09:57.273 --rc geninfo_unexecuted_blocks=1 00:09:57.273 00:09:57.273 ' 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:57.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.273 --rc genhtml_branch_coverage=1 00:09:57.273 --rc genhtml_function_coverage=1 00:09:57.273 --rc genhtml_legend=1 00:09:57.273 --rc geninfo_all_blocks=1 00:09:57.273 --rc geninfo_unexecuted_blocks=1 00:09:57.273 00:09:57.273 ' 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:57.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.273 --rc genhtml_branch_coverage=1 00:09:57.273 --rc genhtml_function_coverage=1 00:09:57.273 --rc genhtml_legend=1 00:09:57.273 --rc geninfo_all_blocks=1 00:09:57.273 --rc geninfo_unexecuted_blocks=1 00:09:57.273 00:09:57.273 ' 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:57.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.273 --rc genhtml_branch_coverage=1 00:09:57.273 --rc genhtml_function_coverage=1 00:09:57.273 --rc genhtml_legend=1 00:09:57.273 --rc geninfo_all_blocks=1 00:09:57.273 --rc geninfo_unexecuted_blocks=1 00:09:57.273 00:09:57.273 ' 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:57.273 
11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:57.273 11:33:00 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:57.273 11:33:00 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:57.273 11:33:00 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:57.273 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:09:57.274 
11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:57.274 11:33:00 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:57.274 #define SPDK_CONFIG_H 00:09:57.274 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:57.274 #define SPDK_CONFIG_APPS 1 00:09:57.274 #define SPDK_CONFIG_ARCH native 00:09:57.274 #undef SPDK_CONFIG_ASAN 00:09:57.274 #undef SPDK_CONFIG_AVAHI 00:09:57.274 #undef SPDK_CONFIG_CET 00:09:57.274 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:57.274 #define SPDK_CONFIG_COVERAGE 1 00:09:57.274 #define SPDK_CONFIG_CROSS_PREFIX 00:09:57.274 #undef SPDK_CONFIG_CRYPTO 00:09:57.274 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:57.274 #undef SPDK_CONFIG_CUSTOMOCF 00:09:57.274 #undef SPDK_CONFIG_DAOS 00:09:57.274 #define SPDK_CONFIG_DAOS_DIR 00:09:57.274 #define SPDK_CONFIG_DEBUG 1 00:09:57.274 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:57.274 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:09:57.274 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:57.274 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:57.274 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:57.274 #undef SPDK_CONFIG_DPDK_UADK 00:09:57.274 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:09:57.274 #define SPDK_CONFIG_EXAMPLES 1 00:09:57.274 #undef SPDK_CONFIG_FC 00:09:57.274 #define SPDK_CONFIG_FC_PATH 00:09:57.274 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:57.274 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:57.274 #define SPDK_CONFIG_FSDEV 1 00:09:57.274 #undef SPDK_CONFIG_FUSE 00:09:57.274 #undef SPDK_CONFIG_FUZZER 00:09:57.274 #define SPDK_CONFIG_FUZZER_LIB 00:09:57.274 #undef SPDK_CONFIG_GOLANG 00:09:57.274 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:57.274 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:57.274 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:57.274 #define SPDK_CONFIG_HAVE_KEYUTILS 1 
00:09:57.274 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:57.274 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:57.274 #undef SPDK_CONFIG_HAVE_LZ4 00:09:57.274 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:57.274 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:57.274 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:57.274 #define SPDK_CONFIG_IDXD 1 00:09:57.274 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:57.274 #undef SPDK_CONFIG_IPSEC_MB 00:09:57.274 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:57.274 #define SPDK_CONFIG_ISAL 1 00:09:57.274 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:57.274 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:57.274 #define SPDK_CONFIG_LIBDIR 00:09:57.274 #undef SPDK_CONFIG_LTO 00:09:57.274 #define SPDK_CONFIG_MAX_LCORES 128 00:09:57.274 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:57.274 #define SPDK_CONFIG_NVME_CUSE 1 00:09:57.274 #undef SPDK_CONFIG_OCF 00:09:57.274 #define SPDK_CONFIG_OCF_PATH 00:09:57.274 #define SPDK_CONFIG_OPENSSL_PATH 00:09:57.274 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:57.274 #define SPDK_CONFIG_PGO_DIR 00:09:57.274 #undef SPDK_CONFIG_PGO_USE 00:09:57.274 #define SPDK_CONFIG_PREFIX /usr/local 00:09:57.274 #undef SPDK_CONFIG_RAID5F 00:09:57.274 #undef SPDK_CONFIG_RBD 00:09:57.274 #define SPDK_CONFIG_RDMA 1 00:09:57.274 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:57.274 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:57.274 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:57.274 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:57.274 #define SPDK_CONFIG_SHARED 1 00:09:57.274 #undef SPDK_CONFIG_SMA 00:09:57.274 #define SPDK_CONFIG_TESTS 1 00:09:57.274 #undef SPDK_CONFIG_TSAN 00:09:57.274 #define SPDK_CONFIG_UBLK 1 00:09:57.274 #define SPDK_CONFIG_UBSAN 1 00:09:57.274 #undef SPDK_CONFIG_UNIT_TESTS 00:09:57.274 #undef SPDK_CONFIG_URING 00:09:57.274 #define SPDK_CONFIG_URING_PATH 00:09:57.274 #undef SPDK_CONFIG_URING_ZNS 00:09:57.274 #undef SPDK_CONFIG_USDT 00:09:57.274 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:57.274 #undef 
SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:57.274 #undef SPDK_CONFIG_VFIO_USER 00:09:57.274 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:57.274 #define SPDK_CONFIG_VHOST 1 00:09:57.274 #define SPDK_CONFIG_VIRTIO 1 00:09:57.274 #undef SPDK_CONFIG_VTUNE 00:09:57.274 #define SPDK_CONFIG_VTUNE_DIR 00:09:57.274 #define SPDK_CONFIG_WERROR 1 00:09:57.274 #define SPDK_CONFIG_WPDK_DIR 00:09:57.274 #undef SPDK_CONFIG_XNVME 00:09:57.274 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:57.274 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 
00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:57.275 11:33:00 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 
00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 
-- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:57.275 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:57.275 11:33:00 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 
00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # 
export SPDK_JSONRPC_GO_CLIENT 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:57.276 11:33:00 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:57.276 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 
00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:57.277 11:33:00 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j72 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 
00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=rdma 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1562082 ]] 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1562082 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ap6Dp9 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ 
-n '' ]] 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ap6Dp9/tests/target /tmp/spdk.ap6Dp9 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- 
# avails["$mount"]=4096 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=55968079872 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61734440960 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5766361088 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/sda1 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=xfs 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=221821267968 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=239938535424 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=18117267456 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs 
size use avail _ mount 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30853758976 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30867218432 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=13459456 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12324052992 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346888192 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22835200 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30866984960 00:09:57.277 
11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30867222528 00:09:57.277 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=237568 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6173429760 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173442048 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:57.278 * Looking for test storage... 
00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=55968079872 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=7980953600 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:57.278 11:33:00 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:57.278 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:57.278 11:33:00 
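The trace above shows `set_test_storage` parsing `df` output into the `mounts`/`fss`/`avails`/`sizes`/`uses` arrays and comparing each candidate's available space against the requested size before settling on `/`. A minimal sketch of that availability check follows (an illustrative simplification, not SPDK's exact code: `parse_avail_kb` is a hypothetical helper, and the sample `df -P` row is modeled on the overlay mount numbers reported in the trace):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the df-based storage check performed by
# set_test_storage in the trace above. parse_avail_kb is a hypothetical
# helper name, not part of SPDK.
set -eu

requested_size=2214592512   # bytes, as requested in the trace

# parse_avail_kb: read `df -P`-style output on stdin, print the
# "Available" column (1024-byte blocks) of the first data row.
parse_avail_kb() {
    awk 'NR==2 {print $4}'
}

# Sample data row modeled on the overlay "/" mount seen in the trace.
sample='Filesystem 1024-blocks Used Available Capacity Mounted-on
overlay 60287540 5631230 54656328 10% /'

avail_kb=$(printf '%s\n' "$sample" | parse_avail_kb)
avail=$(( avail_kb * 1024 ))    # convert 1K blocks to bytes

if [ "$avail" -ge "$requested_size" ]; then
    echo "enough space: $avail bytes available"
fi
```

With these sample numbers `avail` works out to 55968079872 bytes, the same `target_space` the trace reports for the `/` overlay mount before accepting it as `SPDK_TEST_STORAGE`.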
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
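The `lt 1.15 2` call traced above drives `cmp_versions` in scripts/common.sh, which splits both version strings into component arrays and compares them numerically left to right, treating a missing component as 0. A simplified re-implementation of that comparison (an illustrative sketch, not SPDK's exact code: it splits on `.` only, whereas the real helper's `IFS=.-:` also handles `-` and `:` separators):

```shell
#!/usr/bin/env bash
# Illustrative sketch of component-wise version comparison, modeled on
# the cmp_versions logic visible in the trace; not SPDK's exact code.
set -eu

# lt VER1 VER2 -> exit 0 if VER1 < VER2, comparing dot-separated
# numeric components; missing components default to 0.
lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n=${#v1[@]}
    (( ${#v2[@]} > n )) && n=${#v2[@]}
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not less-than
}

lt 1.15 2 && echo "1.15 < 2"
lt 2.1 2  || echo "2.1 >= 2"
```

Defaulting absent components to 0 is what makes `2` compare as `2.0`, so `lt 1.15 2` succeeds (1 < 2 on the first component) while `lt 2.1 2` fails, matching the branch the trace takes when checking the installed lcov version against 2.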
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:57.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.278 --rc genhtml_branch_coverage=1 00:09:57.278 --rc genhtml_function_coverage=1 00:09:57.278 --rc genhtml_legend=1 00:09:57.278 --rc geninfo_all_blocks=1 00:09:57.278 --rc geninfo_unexecuted_blocks=1 00:09:57.278 00:09:57.278 ' 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:57.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.278 --rc genhtml_branch_coverage=1 00:09:57.278 --rc genhtml_function_coverage=1 00:09:57.278 --rc genhtml_legend=1 00:09:57.278 --rc geninfo_all_blocks=1 00:09:57.278 --rc geninfo_unexecuted_blocks=1 00:09:57.278 00:09:57.278 ' 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:57.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.278 --rc genhtml_branch_coverage=1 00:09:57.278 --rc genhtml_function_coverage=1 00:09:57.278 --rc genhtml_legend=1 00:09:57.278 --rc geninfo_all_blocks=1 00:09:57.278 --rc geninfo_unexecuted_blocks=1 00:09:57.278 00:09:57.278 ' 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:57.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.278 --rc genhtml_branch_coverage=1 00:09:57.278 --rc genhtml_function_coverage=1 00:09:57.278 --rc genhtml_legend=1 00:09:57.278 --rc geninfo_all_blocks=1 00:09:57.278 --rc geninfo_unexecuted_blocks=1 00:09:57.278 00:09:57.278 ' 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:09:57.278 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@50 -- # : 0 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:57.279 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # remove_target_ns 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:09:57.279 
11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # xtrace_disable 00:09:57.279 11:33:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@131 -- # pci_devs=() 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@135 -- # net_devs=() 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@136 -- # e810=() 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@136 -- # local -ga e810 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@137 -- # x722=() 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@137 -- # local -ga x722 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@138 -- # mlx=() 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@138 -- # local -ga mlx 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.841 
11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:10:03.841 11:33:07 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:03.841 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:03.841 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:03.841 Found net devices under 0000:18:00.0: mlx_0_0 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 
-- # (( 1 == 0 )) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:03.841 Found net devices under 0000:18:00.1: mlx_0_1 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # get_rdma_if_list 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@75 -- # rdma_devs=() 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:03.841 11:33:07 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@89 -- # continue 2 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@89 -- # continue 2 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # is_hw=yes 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:10:03.841 11:33:07 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@61 -- # uname 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_cm 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_core 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_umad 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe iw_cm 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@28 -- # local -g _dev 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # ips=() 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@58 -- # key_initiator=target1 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772161 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:10:03.841 10.0.0.1 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772162 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:10:03.841 11:33:07 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:10:03.841 10.0.0.2 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 
00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:10:03.841 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=target0 00:10:03.842 11:33:07 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:10:03.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:03.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.029 ms 00:10:03.842 00:10:03.842 --- 10.0.0.2 ping statistics --- 00:10:03.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.842 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=target0 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:03.842 
11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:10:03.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.022 ms 00:10:03.842 00:10:03.842 --- 10.0.0.2 ping statistics --- 00:10:03.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.842 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair++ )) 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # return 0 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@192 -- # 
get_rdma_target_ip_address '' 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=target0 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:10:03.842 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:10:04.101 11:33:07 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev target1 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=target1 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=target0 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:10:04.101 
11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev target1 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=target1 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.101 ************************************ 00:10:04.101 START TEST nvmf_filesystem_no_in_capsule 00:10:04.101 ************************************ 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:04.101 11:33:07 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=1565126 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 1565126 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1565126 ']' 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.101 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.101 [2024-11-20 11:33:07.513983] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:10:04.101 [2024-11-20 11:33:07.514043] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.367 [2024-11-20 11:33:07.593399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.367 [2024-11-20 11:33:07.640338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.367 [2024-11-20 11:33:07.640382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.367 [2024-11-20 11:33:07.640392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.367 [2024-11-20 11:33:07.640417] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.367 [2024-11-20 11:33:07.640425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:04.367 [2024-11-20 11:33:07.641824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.367 [2024-11-20 11:33:07.641911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.367 [2024-11-20 11:33:07.642004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.367 [2024-11-20 11:33:07.642006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.367 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.367 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:04.367 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:04.367 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.367 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.367 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.367 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:04.367 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:10:04.367 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.367 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
00:10:04.367 [2024-11-20 11:33:07.796023] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:10:04.367 [2024-11-20 11:33:07.816391] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb52220/0xb56710) succeed. 00:10:04.367 [2024-11-20 11:33:07.825517] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb538b0/0xb97db0) succeed. 00:10:04.624 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.624 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:04.624 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.624 11:33:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.624 Malloc1 00:10:04.624 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.624 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:04.624 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.624 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.624 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.624 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:04.624 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.624 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.624 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.624 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:10:04.624 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.624 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.624 [2024-11-20 11:33:08.095559] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:10:04.624 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.882 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:04.882 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:04.882 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:04.882 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:04.882 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1385 -- # local nb 00:10:04.882 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:04.882 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.882 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.882 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.882 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:04.882 { 00:10:04.882 "name": "Malloc1", 00:10:04.882 "aliases": [ 00:10:04.882 "e23da8ea-106e-4c54-b1d9-cb0c24a9d6d0" 00:10:04.882 ], 00:10:04.882 "product_name": "Malloc disk", 00:10:04.882 "block_size": 512, 00:10:04.882 "num_blocks": 1048576, 00:10:04.882 "uuid": "e23da8ea-106e-4c54-b1d9-cb0c24a9d6d0", 00:10:04.882 "assigned_rate_limits": { 00:10:04.882 "rw_ios_per_sec": 0, 00:10:04.882 "rw_mbytes_per_sec": 0, 00:10:04.882 "r_mbytes_per_sec": 0, 00:10:04.882 "w_mbytes_per_sec": 0 00:10:04.882 }, 00:10:04.882 "claimed": true, 00:10:04.882 "claim_type": "exclusive_write", 00:10:04.882 "zoned": false, 00:10:04.882 "supported_io_types": { 00:10:04.882 "read": true, 00:10:04.882 "write": true, 00:10:04.882 "unmap": true, 00:10:04.882 "flush": true, 00:10:04.882 "reset": true, 00:10:04.882 "nvme_admin": false, 00:10:04.882 "nvme_io": false, 00:10:04.882 "nvme_io_md": false, 00:10:04.882 "write_zeroes": true, 00:10:04.882 "zcopy": true, 00:10:04.882 "get_zone_info": false, 00:10:04.882 "zone_management": false, 00:10:04.882 "zone_append": false, 00:10:04.882 "compare": false, 00:10:04.882 "compare_and_write": false, 00:10:04.882 "abort": true, 00:10:04.882 "seek_hole": false, 
00:10:04.882 "seek_data": false, 00:10:04.882 "copy": true, 00:10:04.882 "nvme_iov_md": false 00:10:04.882 }, 00:10:04.882 "memory_domains": [ 00:10:04.882 { 00:10:04.882 "dma_device_id": "system", 00:10:04.882 "dma_device_type": 1 00:10:04.882 }, 00:10:04.882 { 00:10:04.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.882 "dma_device_type": 2 00:10:04.882 } 00:10:04.882 ], 00:10:04.882 "driver_specific": {} 00:10:04.882 } 00:10:04.882 ]' 00:10:04.882 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:04.882 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:04.882 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:04.882 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:04.882 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:04.882 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:04.882 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:04.882 11:33:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:05.817 11:33:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:05.817 
11:33:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:05.817 11:33:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:05.817 11:33:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:05.817 11:33:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:08.340 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:08.340 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:08.341 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:08.341 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:08.341 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:08.341 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:08.341 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:08.341 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:08.341 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # 
nvme_name=nvme0n1 00:10:08.341 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:08.341 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:08.341 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:08.341 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:08.341 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:08.341 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:08.341 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:08.341 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:08.341 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:08.341 11:33:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:09.272 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:09.272 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:09.272 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 00:10:09.272 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.272 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:09.272 ************************************ 00:10:09.272 START TEST filesystem_ext4 00:10:09.272 ************************************ 00:10:09.272 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:09.272 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:09.272 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:09.272 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:09.272 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:09.272 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:09.272 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:09.272 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:09.272 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:09.272 11:33:12 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:09.272 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:09.272 mke2fs 1.47.0 (5-Feb-2023) 00:10:09.272 Discarding device blocks: 0/522240 done 00:10:09.272 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:09.273 Filesystem UUID: 61886e00-a79d-4c1a-a2ad-364327b5ad64 00:10:09.273 Superblock backups stored on blocks: 00:10:09.273 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:09.273 00:10:09.273 Allocating group tables: 0/64 done 00:10:09.273 Writing inode tables: 0/64 done 00:10:09.273 Creating journal (8192 blocks): done 00:10:09.273 Writing superblocks and filesystem accounting information: 0/64 done 00:10:09.273 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@29 -- # i=0 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1565126 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:09.273 00:10:09.273 real 0m0.230s 00:10:09.273 user 0m0.038s 00:10:09.273 sys 0m0.075s 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:09.273 ************************************ 00:10:09.273 END TEST filesystem_ext4 00:10:09.273 ************************************ 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.273 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:09.530 ************************************ 00:10:09.530 START TEST filesystem_btrfs 00:10:09.530 ************************************ 00:10:09.530 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:09.530 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:09.530 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:09.530 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:09.530 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:09.531 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:09.531 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:09.531 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:09.531 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:09.531 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- 
common/autotest_common.sh@938 -- # force=-f 00:10:09.531 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:09.531 btrfs-progs v6.8.1 00:10:09.531 See https://btrfs.readthedocs.io for more information. 00:10:09.531 00:10:09.531 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:09.531 NOTE: several default settings have changed in version 5.15, please make sure 00:10:09.531 this does not affect your deployments: 00:10:09.531 - DUP for metadata (-m dup) 00:10:09.531 - enabled no-holes (-O no-holes) 00:10:09.531 - enabled free-space-tree (-R free-space-tree) 00:10:09.531 00:10:09.531 Label: (null) 00:10:09.531 UUID: d43abf4c-24a1-4906-8110-c1e077d4811e 00:10:09.531 Node size: 16384 00:10:09.531 Sector size: 4096 (CPU page size: 4096) 00:10:09.531 Filesystem size: 510.00MiB 00:10:09.531 Block group profiles: 00:10:09.531 Data: single 8.00MiB 00:10:09.531 Metadata: DUP 32.00MiB 00:10:09.531 System: DUP 8.00MiB 00:10:09.531 SSD detected: yes 00:10:09.531 Zoned device: no 00:10:09.531 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:09.531 Checksum: crc32c 00:10:09.531 Number of devices: 1 00:10:09.531 Devices: 00:10:09.531 ID SIZE PATH 00:10:09.531 1 510.00MiB /dev/nvme0n1p1 00:10:09.531 00:10:09.531 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:09.531 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:09.531 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:09.531 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- 
target/filesystem.sh@25 -- # sync 00:10:09.531 11:33:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:09.531 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1565126 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:09.789 00:10:09.789 real 0m0.299s 00:10:09.789 user 0m0.040s 00:10:09.789 sys 0m0.170s 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:09.789 ************************************ 00:10:09.789 END TEST filesystem_btrfs 00:10:09.789 
************************************ 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:09.789 ************************************ 00:10:09.789 START TEST filesystem_xfs 00:10:09.789 ************************************ 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:09.789 
11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:09.789 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:09.789 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:09.789 = sectsz=512 attr=2, projid32bit=1 00:10:09.789 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:09.789 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:09.789 data = bsize=4096 blocks=130560, imaxpct=25 00:10:09.789 = sunit=0 swidth=0 blks 00:10:09.789 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:09.789 log =internal log bsize=4096 blocks=16384, version=2 00:10:09.789 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:09.789 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:10.047 Discarding blocks...Done. 
00:10:10.047 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:10.047 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:10.047 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:10.047 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:10.047 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:10.047 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:10.047 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:10.047 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:10.047 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1565126 00:10:10.047 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:10.047 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:10.047 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:10.047 11:33:13 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:10.047 00:10:10.047 real 0m0.245s 00:10:10.047 user 0m0.027s 00:10:10.047 sys 0m0.102s 00:10:10.047 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.048 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:10.048 ************************************ 00:10:10.048 END TEST filesystem_xfs 00:10:10.048 ************************************ 00:10:10.048 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:10.048 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:10.048 11:33:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:10.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.980 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:10.980 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:10.980 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:10.980 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1565126 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1565126 ']' 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1565126 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1565126 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1565126' 00:10:11.239 killing process with pid 1565126 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1565126 00:10:11.239 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1565126 00:10:11.497 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:11.497 00:10:11.497 real 0m7.518s 00:10:11.497 user 0m29.264s 00:10:11.497 sys 0m1.315s 00:10:11.756 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.756 11:33:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.756 ************************************ 00:10:11.756 END TEST nvmf_filesystem_no_in_capsule 00:10:11.756 ************************************ 00:10:11.756 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:11.756 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.756 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.756 11:33:15 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.756 ************************************ 00:10:11.756 START TEST nvmf_filesystem_in_capsule 00:10:11.756 ************************************ 00:10:11.756 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:11.756 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:11.756 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:11.756 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:11.756 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.756 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.756 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=1566757 00:10:11.756 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 1566757 00:10:11.756 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:11.756 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1566757 ']' 00:10:11.756 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.756 11:33:15 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.756 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.756 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.756 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.756 [2024-11-20 11:33:15.124701] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:10:11.756 [2024-11-20 11:33:15.124761] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.756 [2024-11-20 11:33:15.201788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.014 [2024-11-20 11:33:15.251442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.014 [2024-11-20 11:33:15.251486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.014 [2024-11-20 11:33:15.251497] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.014 [2024-11-20 11:33:15.251505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.014 [2024-11-20 11:33:15.251512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:12.014 [2024-11-20 11:33:15.252947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.014 [2024-11-20 11:33:15.253045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.014 [2024-11-20 11:33:15.253096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.014 [2024-11-20 11:33:15.253098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.014 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.014 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:12.014 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:12.014 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:12.014 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.014 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.014 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:12.014 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:10:12.014 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.014 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.014 [2024-11-20 
11:33:15.435311] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e45220/0x1e49710) succeed. 00:10:12.014 [2024-11-20 11:33:15.444585] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e468b0/0x1e8adb0) succeed. 00:10:12.272 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.272 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:12.272 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.272 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.272 Malloc1 00:10:12.272 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.272 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:12.272 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.272 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.273 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.273 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:12.273 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.273 
11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.273 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.273 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:10:12.273 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.273 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.273 [2024-11-20 11:33:15.729407] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:10:12.273 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.273 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:12.273 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:12.273 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:12.273 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:12.273 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:12.273 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:12.273 11:33:15 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.273 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.531 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.531 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:12.531 { 00:10:12.531 "name": "Malloc1", 00:10:12.531 "aliases": [ 00:10:12.531 "0a944c0a-161b-4fea-be06-cc234b6ba055" 00:10:12.531 ], 00:10:12.531 "product_name": "Malloc disk", 00:10:12.531 "block_size": 512, 00:10:12.531 "num_blocks": 1048576, 00:10:12.531 "uuid": "0a944c0a-161b-4fea-be06-cc234b6ba055", 00:10:12.531 "assigned_rate_limits": { 00:10:12.531 "rw_ios_per_sec": 0, 00:10:12.531 "rw_mbytes_per_sec": 0, 00:10:12.531 "r_mbytes_per_sec": 0, 00:10:12.531 "w_mbytes_per_sec": 0 00:10:12.531 }, 00:10:12.531 "claimed": true, 00:10:12.531 "claim_type": "exclusive_write", 00:10:12.531 "zoned": false, 00:10:12.531 "supported_io_types": { 00:10:12.531 "read": true, 00:10:12.531 "write": true, 00:10:12.531 "unmap": true, 00:10:12.531 "flush": true, 00:10:12.531 "reset": true, 00:10:12.531 "nvme_admin": false, 00:10:12.531 "nvme_io": false, 00:10:12.531 "nvme_io_md": false, 00:10:12.531 "write_zeroes": true, 00:10:12.531 "zcopy": true, 00:10:12.531 "get_zone_info": false, 00:10:12.531 "zone_management": false, 00:10:12.531 "zone_append": false, 00:10:12.531 "compare": false, 00:10:12.531 "compare_and_write": false, 00:10:12.531 "abort": true, 00:10:12.531 "seek_hole": false, 00:10:12.531 "seek_data": false, 00:10:12.531 "copy": true, 00:10:12.531 "nvme_iov_md": false 00:10:12.531 }, 00:10:12.531 "memory_domains": [ 00:10:12.531 { 00:10:12.531 "dma_device_id": "system", 00:10:12.531 "dma_device_type": 1 00:10:12.531 }, 
00:10:12.531 { 00:10:12.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.531 "dma_device_type": 2 00:10:12.531 } 00:10:12.531 ], 00:10:12.531 "driver_specific": {} 00:10:12.531 } 00:10:12.531 ]' 00:10:12.531 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:12.531 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:12.531 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:12.531 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:12.531 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:12.531 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:12.531 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:12.531 11:33:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:13.463 11:33:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:13.463 11:33:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:13.463 11:33:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local 
nvme_device_counter=1 nvme_devices=0 00:10:13.463 11:33:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:13.463 11:33:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:15.988 11:33:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:15.988 11:33:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:15.988 11:33:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:15.988 11:33:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:15.988 11:33:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:15.988 11:33:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:15.988 11:33:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:15.988 11:33:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:15.988 11:33:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:15.988 11:33:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:15.988 11:33:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 
00:10:15.988 11:33:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:15.988 11:33:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:15.988 11:33:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:15.988 11:33:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:15.988 11:33:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:15.988 11:33:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:15.988 11:33:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:15.988 11:33:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:16.919 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:16.919 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:16.919 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:16.919 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.919 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.919 
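The trace above resolves the connected namespace's capacity (`sec_size_to_bytes nvme0n1` reading `/sys/block/nvme0n1`, yielding 536870912 bytes for the 1048576-block, 512-byte Malloc bdev) and compares it with `malloc_size`. A minimal sketch of that sysfs size lookup, with the sysfs root parameterized so the logic runs against a mock directory instead of a real NVMe device (the function signature here is illustrative, not the exact helper from setup/common.sh):

```shell
# Sketch of the size check: /sys/block/<dev>/size reports the device
# length in 512-byte sectors, so bytes = sectors * 512.
sec_size_to_bytes() {
    local dev=$1 sysfs=${2:-/sys}
    [[ -e $sysfs/block/$dev/size ]] || return 1
    echo $(( $(cat "$sysfs/block/$dev/size") * 512 ))
}

# Demonstrate with a mock sysfs entry: 1048576 sectors, as in the log.
mock=$(mktemp -d)
mkdir -p "$mock/block/nvme0n1"
echo 1048576 > "$mock/block/nvme0n1/size"
nvme_size=$(sec_size_to_bytes nvme0n1 "$mock")
echo "$nvme_size"   # 536870912, matching malloc_size above
rm -rf "$mock"
```

The test then asserts `(( nvme_size == malloc_size ))` before partitioning the device.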
************************************ 00:10:16.919 START TEST filesystem_in_capsule_ext4 00:10:16.919 ************************************ 00:10:16.919 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:16.919 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:16.919 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:16.919 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:16.919 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:16.919 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:16.919 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:16.919 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:16.919 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:16.919 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:16.919 11:33:20 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:16.919 mke2fs 1.47.0 (5-Feb-2023) 00:10:16.919 Discarding device blocks: 0/522240 done 00:10:16.920 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:16.920 Filesystem UUID: b53ad761-3c19-4b89-a708-e9f708db904a 00:10:16.920 Superblock backups stored on blocks: 00:10:16.920 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:16.920 00:10:16.920 Allocating group tables: 0/64 done 00:10:16.920 Writing inode tables: 0/64 done 00:10:16.920 Creating journal (8192 blocks): done 00:10:16.920 Writing superblocks and filesystem accounting information: 0/64 done 00:10:16.920 00:10:16.920 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:16.920 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:16.920 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:16.920 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:16.920 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:16.920 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:16.920 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:16.920 11:33:20 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1566757 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:17.178 00:10:17.178 real 0m0.253s 00:10:17.178 user 0m0.043s 00:10:17.178 sys 0m0.101s 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:17.178 ************************************ 00:10:17.178 END TEST filesystem_in_capsule_ext4 00:10:17.178 ************************************ 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:17.178 11:33:20 
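Each per-filesystem test above follows the same smoke-test shape from target/filesystem.sh (steps 23-30): mount the partition, create a file, sync, delete it, sync, unmount. A hedged sketch of that file-lifecycle check, with the mount/umount pair replaced by a plain temp directory so it runs without a block device (the function name is illustrative):

```shell
# Sketch of the filesystem smoke test: the filesystem must accept a
# write, a sync, and a deletion, and the file must really be gone.
smoke_test_dir() {
    local mnt=$1
    touch "$mnt/aaa" || return 1   # filesystem accepts writes
    sync
    rm "$mnt/aaa" || return 1      # and deletions
    sync
    [[ ! -e $mnt/aaa ]]            # file is really gone
}

mnt=$(mktemp -d)
smoke_test_dir "$mnt" && echo "smoke test passed"
rmdir "$mnt"
```

In the real test the `kill -0 $nvmfpid` and `lsblk | grep -q -w nvme0n1` checks afterwards confirm the target process and the namespace both survived the I/O.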
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.178 ************************************ 00:10:17.178 START TEST filesystem_in_capsule_btrfs 00:10:17.178 ************************************ 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 
-- # '[' btrfs = ext4 ']' 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:17.178 btrfs-progs v6.8.1 00:10:17.178 See https://btrfs.readthedocs.io for more information. 00:10:17.178 00:10:17.178 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:17.178 NOTE: several default settings have changed in version 5.15, please make sure 00:10:17.178 this does not affect your deployments: 00:10:17.178 - DUP for metadata (-m dup) 00:10:17.178 - enabled no-holes (-O no-holes) 00:10:17.178 - enabled free-space-tree (-R free-space-tree) 00:10:17.178 00:10:17.178 Label: (null) 00:10:17.178 UUID: ff447659-3886-4139-8696-ef1d36fdb222 00:10:17.178 Node size: 16384 00:10:17.178 Sector size: 4096 (CPU page size: 4096) 00:10:17.178 Filesystem size: 510.00MiB 00:10:17.178 Block group profiles: 00:10:17.178 Data: single 8.00MiB 00:10:17.178 Metadata: DUP 32.00MiB 00:10:17.178 System: DUP 8.00MiB 00:10:17.178 SSD detected: yes 00:10:17.178 Zoned device: no 00:10:17.178 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:17.178 Checksum: crc32c 00:10:17.178 Number of devices: 1 00:10:17.178 Devices: 00:10:17.178 ID SIZE PATH 00:10:17.178 1 510.00MiB /dev/nvme0n1p1 00:10:17.178 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:17.178 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:17.436 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:17.436 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:17.436 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:17.436 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:17.436 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1566757 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:17.437 00:10:17.437 real 0m0.260s 00:10:17.437 user 0m0.027s 00:10:17.437 sys 0m0.128s 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:17.437 ************************************ 00:10:17.437 END TEST filesystem_in_capsule_btrfs 00:10:17.437 ************************************ 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.437 ************************************ 00:10:17.437 START TEST filesystem_in_capsule_xfs 00:10:17.437 ************************************ 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:17.437 11:33:20 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:17.437 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:17.695 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:17.695 = sectsz=512 attr=2, projid32bit=1 00:10:17.695 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:17.695 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:17.695 data = bsize=4096 blocks=130560, imaxpct=25 00:10:17.695 = sunit=0 swidth=0 blks 00:10:17.695 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:17.695 log =internal log bsize=4096 blocks=16384, version=2 00:10:17.695 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:17.695 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:17.695 Discarding blocks...Done. 
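Across the three runs the `make_filesystem` helper picks a different force flag per filesystem: the xtrace shows `force=-F` for ext4 (`mkfs.ext4 -F`) but `force=-f` for btrfs and xfs. A sketch of just that flag-selection branch, with the actual `mkfs` call stubbed out so the logic is runnable without a device (the function name here is illustrative; the real helper in common/autotest_common.sh also retries the mkfs):

```shell
# Sketch of the force-flag dispatch seen at autotest_common.sh@935-938:
# mke2fs uses a capitalized -F, while mkfs.btrfs and mkfs.xfs use -f.
make_filesystem_force_flag() {
    local fstype=$1 force
    if [[ $fstype == ext4 ]]; then
        force=-F   # '[' ext4 = ext4 ']' branch
    else
        force=-f   # btrfs, xfs
    fi
    echo "$force"
    # real helper then runs: mkfs.$fstype $force $dev_name
}

make_filesystem_force_flag ext4
make_filesystem_force_flag xfs
```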
00:10:17.695 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:17.695 11:33:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:17.695 11:33:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:17.695 11:33:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:17.695 11:33:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:17.695 11:33:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:17.695 11:33:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:17.695 11:33:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:17.695 11:33:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1566757 00:10:17.695 11:33:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:17.695 11:33:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:17.695 11:33:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o 
NAME 00:10:17.695 11:33:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:17.695 00:10:17.695 real 0m0.226s 00:10:17.695 user 0m0.027s 00:10:17.695 sys 0m0.083s 00:10:17.695 11:33:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.695 11:33:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:17.695 ************************************ 00:10:17.695 END TEST filesystem_in_capsule_xfs 00:10:17.695 ************************************ 00:10:17.695 11:33:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:17.695 11:33:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:17.952 11:33:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:18.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.886 11:33:22 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1566757 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1566757 ']' 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1566757 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.886 11:33:22 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1566757 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1566757' 00:10:18.886 killing process with pid 1566757 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1566757 00:10:18.886 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1566757 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:19.453 00:10:19.453 real 0m7.588s 00:10:19.453 user 0m29.469s 00:10:19.453 sys 0m1.339s 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.453 ************************************ 00:10:19.453 END TEST nvmf_filesystem_in_capsule 00:10:19.453 ************************************ 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@99 -- # sync 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # set +e 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:10:19.453 rmmod nvme_rdma 00:10:19.453 rmmod nvme_fabrics 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # set -e 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # return 0 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # nvmf_fini 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@264 -- # local dev 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@267 -- # remove_target_ns 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@268 -- # delete_main_bridge 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@130 
-- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@130 -- # return 0 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # ip addr flush dev 
mlx_0_0 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # _dev=0 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # dev_map=() 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@284 -- # iptr 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@542 -- # iptables-save 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@542 -- # iptables-restore 00:10:19.453 00:10:19.453 real 0m22.500s 00:10:19.453 user 1m0.938s 00:10:19.453 sys 0m8.028s 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.453 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:19.453 ************************************ 00:10:19.454 END TEST nvmf_filesystem 00:10:19.454 ************************************ 00:10:19.454 11:33:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:10:19.454 11:33:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:19.454 11:33:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.454 11:33:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:19.454 ************************************ 00:10:19.454 START TEST nvmf_target_discovery 00:10:19.454 ************************************ 00:10:19.454 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:10:19.713 * Looking for test storage... 00:10:19.713 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:19.713 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:19.713 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:10:19.713 11:33:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:19.713 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:19.713 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.713 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.713 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.713 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.713 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.713 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 
00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:19.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.714 --rc genhtml_branch_coverage=1 00:10:19.714 --rc genhtml_function_coverage=1 00:10:19.714 --rc genhtml_legend=1 00:10:19.714 --rc geninfo_all_blocks=1 00:10:19.714 --rc geninfo_unexecuted_blocks=1 00:10:19.714 00:10:19.714 ' 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:19.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.714 --rc genhtml_branch_coverage=1 00:10:19.714 --rc genhtml_function_coverage=1 00:10:19.714 --rc genhtml_legend=1 00:10:19.714 --rc geninfo_all_blocks=1 00:10:19.714 --rc geninfo_unexecuted_blocks=1 00:10:19.714 00:10:19.714 ' 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:19.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.714 --rc genhtml_branch_coverage=1 00:10:19.714 --rc genhtml_function_coverage=1 00:10:19.714 --rc genhtml_legend=1 00:10:19.714 --rc geninfo_all_blocks=1 00:10:19.714 --rc geninfo_unexecuted_blocks=1 00:10:19.714 00:10:19.714 ' 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:19.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.714 --rc genhtml_branch_coverage=1 00:10:19.714 --rc genhtml_function_coverage=1 00:10:19.714 --rc genhtml_legend=1 00:10:19.714 --rc geninfo_all_blocks=1 00:10:19.714 --rc geninfo_unexecuted_blocks=1 00:10:19.714 00:10:19.714 ' 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@50 -- # : 0 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:19.714 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # nvmftestinit 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:10:19.714 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.715 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:19.715 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:19.715 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:10:19.715 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:19.715 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:19.715 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:19.715 11:33:23 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:10:19.715 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:10:19.715 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # xtrace_disable 00:10:19.715 11:33:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@131 -- # pci_devs=() 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@135 -- # net_devs=() 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@136 -- # e810=() 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@136 -- # local -ga e810 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@137 -- # x722=() 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@137 -- # local -ga x722 00:10:26.279 11:33:29 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@138 -- # mlx=() 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@138 -- # local -ga mlx 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:26.279 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:26.279 11:33:29 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:26.279 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:26.279 Found net devices under 
0000:18:00.0: mlx_0_0 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:26.279 Found net devices under 0000:18:00.1: mlx_0_1 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # get_rdma_if_list 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@75 -- # rdma_devs=() 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:10:26.279 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:10:26.280 
11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@89 -- # continue 2 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@89 -- # continue 2 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # is_hw=yes 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@61 -- # uname 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_cm 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_core 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_umad 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe iw_cm 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:10:26.280 11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@28 -- # local -g _dev
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 ))
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev ))
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # ips=()
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns=
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip)))
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]]
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]]
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@58 -- # key_initiator=target1
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # [[ phy == phy ]]
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@64 -- # initiator=mlx_0_0
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@64 -- # target=mlx_0_1
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@67 -- # [[ phy == veth ]]
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@68 -- # [[ phy == veth ]]
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]]
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns=
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # val_to_ip 167772161
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772161
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip=10.0.0.1
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0'
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias'
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # echo 10.0.0.1
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias
00:10:26.280  10.0.0.1
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns=
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # val_to_ip 167772162
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772162
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip=10.0.0.2
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1'
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias'
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # echo 10.0.0.2
00:10:26.280  10.0.0.2
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@75 -- # set_up mlx_0_0
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns=
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # [[ -n '' ]]
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up'
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # set_up mlx_0_1
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns=
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # [[ -n '' ]]
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up'
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@78 -- # [[ phy == veth ]]
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@79 -- # [[ phy == veth ]]
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]]
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@38 -- # ping_ips 1
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@96 -- # local pairs=1 pair
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair = 0 ))
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # get_target_ip_address 0
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 ''
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip
00:10:26.280  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=target0
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo mlx_0_1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=mlx_0_1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias'
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2'
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2
00:10:26.281  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:26.281  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms
00:10:26.281  
00:10:26.281  --- 10.0.0.2 ping statistics ---
00:10:26.281  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:26.281  rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # get_target_ip_address 0
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 ''
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=target0
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo mlx_0_1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=mlx_0_1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias'
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # [[ -n '' ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2'
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2
00:10:26.281  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:26.281  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.028 ms
00:10:26.281  
00:10:26.281  --- 10.0.0.2 ping statistics ---
00:10:26.281  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:26.281  rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair++ ))
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # return 0
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address ''
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # get_target_ip_address ''
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 ''
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=target0
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo mlx_0_1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=mlx_0_1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias'
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # get_target_ip_address 1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@179 -- # get_ip_address target1 ''
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev target1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=target1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n target1 ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo mlx_0_0
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=mlx_0_0
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias'
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # get_target_ip_address ''
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 ''
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=target0
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo mlx_0_1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=mlx_0_1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias'
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # get_target_ip_address 1
00:10:26.281  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@179 -- # get_ip_address target1 ''
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev target1
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=target1
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n target1 ]]
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]]
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo mlx_0_0
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=mlx_0_0
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias'
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]]
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # '[' rdma == tcp ']'
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # '[' rdma == rdma ']'
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # modprobe nvme-rdma
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@16 -- # nvmfappstart -m 0xF
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # nvmfpid=1570855
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # waitforlisten 1570855
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1570855 ']'
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:26.282  11:33:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:26.282  [2024-11-20 11:33:29.652610] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:10:26.282  [2024-11-20 11:33:29.652666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:26.282  [2024-11-20 11:33:29.732468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:26.540  [2024-11-20 11:33:29.782749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:26.540  [2024-11-20 11:33:29.782790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:26.540  [2024-11-20 11:33:29.782799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:26.540  [2024-11-20 11:33:29.782809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:26.540  [2024-11-20 11:33:29.782816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:26.540  [2024-11-20 11:33:29.784165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:26.540  [2024-11-20 11:33:29.784278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:26.540  [2024-11-20 11:33:29.784370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:26.540  [2024-11-20 11:33:29.784372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:27.107  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:27.107  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0
00:10:27.107  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:10:27.107  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:27.107  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.107  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:27.107  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:10:27.107  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.107  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.366  [2024-11-20 11:33:30.591349] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2446220/0x244a710) succeed.
00:10:27.366  [2024-11-20 11:33:30.600501] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24478b0/0x248bdb0) succeed.
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # seq 1 4
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4)
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null1 102400 512
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.366  Null1
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.366  [2024-11-20 11:33:30.779324] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 ***
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.366  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4)
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null2 102400 512
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.367  Null2
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 10.0.0.2 -s 4420
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4)
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null3 102400 512
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.367  Null3
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.367  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 10.0.0.2 -s 4420
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4)
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null4 102400 512
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.626  Null4
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 10.0.0.2 -s 4420
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 10.0.0.2 -s 4420
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 10.0.0.2 -s 4430
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:27.626  11:33:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 10.0.0.2 -s 4420
00:10:27.626  
00:10:27.626  Discovery Log Number of Records 6, Generation counter 6
00:10:27.626  =====Discovery Log Entry 0======
00:10:27.626  trtype: rdma
00:10:27.626  adrfam: ipv4
00:10:27.626  subtype: current discovery subsystem
00:10:27.626  treq: not required
00:10:27.626  portid: 0
00:10:27.626  trsvcid: 4420
00:10:27.626  subnqn: nqn.2014-08.org.nvmexpress.discovery
00:10:27.626  traddr: 10.0.0.2
00:10:27.626  eflags: explicit discovery connections, duplicate discovery information
00:10:27.626  rdma_prtype: not specified
00:10:27.626  rdma_qptype: connected
00:10:27.626  rdma_cms: rdma-cm
00:10:27.626  rdma_pkey: 0x0000
00:10:27.626  =====Discovery Log Entry 1======
00:10:27.626  trtype: rdma
00:10:27.626  adrfam: ipv4
00:10:27.626 subtype: nvme subsystem 00:10:27.626 treq: not required 00:10:27.626 portid: 0 00:10:27.626 trsvcid: 4420 00:10:27.626 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:27.626 traddr: 10.0.0.2 00:10:27.626 eflags: none 00:10:27.626 rdma_prtype: not specified 00:10:27.626 rdma_qptype: connected 00:10:27.626 rdma_cms: rdma-cm 00:10:27.626 rdma_pkey: 0x0000 00:10:27.626 =====Discovery Log Entry 2====== 00:10:27.626 trtype: rdma 00:10:27.626 adrfam: ipv4 00:10:27.626 subtype: nvme subsystem 00:10:27.626 treq: not required 00:10:27.626 portid: 0 00:10:27.626 trsvcid: 4420 00:10:27.626 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:27.626 traddr: 10.0.0.2 00:10:27.626 eflags: none 00:10:27.626 rdma_prtype: not specified 00:10:27.626 rdma_qptype: connected 00:10:27.626 rdma_cms: rdma-cm 00:10:27.626 rdma_pkey: 0x0000 00:10:27.626 =====Discovery Log Entry 3====== 00:10:27.626 trtype: rdma 00:10:27.626 adrfam: ipv4 00:10:27.626 subtype: nvme subsystem 00:10:27.626 treq: not required 00:10:27.626 portid: 0 00:10:27.626 trsvcid: 4420 00:10:27.626 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:27.626 traddr: 10.0.0.2 00:10:27.626 eflags: none 00:10:27.626 rdma_prtype: not specified 00:10:27.626 rdma_qptype: connected 00:10:27.626 rdma_cms: rdma-cm 00:10:27.626 rdma_pkey: 0x0000 00:10:27.626 =====Discovery Log Entry 4====== 00:10:27.626 trtype: rdma 00:10:27.626 adrfam: ipv4 00:10:27.626 subtype: nvme subsystem 00:10:27.626 treq: not required 00:10:27.626 portid: 0 00:10:27.626 trsvcid: 4420 00:10:27.626 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:27.626 traddr: 10.0.0.2 00:10:27.626 eflags: none 00:10:27.626 rdma_prtype: not specified 00:10:27.626 rdma_qptype: connected 00:10:27.626 rdma_cms: rdma-cm 00:10:27.626 rdma_pkey: 0x0000 00:10:27.626 =====Discovery Log Entry 5====== 00:10:27.626 trtype: rdma 00:10:27.626 adrfam: ipv4 00:10:27.626 subtype: discovery subsystem referral 00:10:27.626 treq: not required 00:10:27.626 portid: 0 00:10:27.626 trsvcid: 4430 00:10:27.626 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:10:27.626 traddr: 10.0.0.2 00:10:27.626 eflags: none 00:10:27.626 rdma_prtype: unrecognized 00:10:27.626 rdma_qptype: unrecognized 00:10:27.626 rdma_cms: unrecognized 00:10:27.626 rdma_pkey: 0x0000 00:10:27.626 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@34 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:27.626 Perform nvmf subsystem discovery via RPC 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_get_subsystems 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.627 [ 00:10:27.627 { 00:10:27.627 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:27.627 "subtype": "Discovery", 00:10:27.627 "listen_addresses": [ 00:10:27.627 { 00:10:27.627 "trtype": "RDMA", 00:10:27.627 "adrfam": "IPv4", 00:10:27.627 "traddr": "10.0.0.2", 00:10:27.627 "trsvcid": "4420" 00:10:27.627 } 00:10:27.627 ], 00:10:27.627 "allow_any_host": true, 00:10:27.627 "hosts": [] 00:10:27.627 }, 00:10:27.627 { 00:10:27.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:27.627 "subtype": "NVMe", 00:10:27.627 "listen_addresses": [ 00:10:27.627 { 00:10:27.627 "trtype": "RDMA", 00:10:27.627 "adrfam": "IPv4", 00:10:27.627 "traddr": "10.0.0.2", 00:10:27.627 "trsvcid": "4420" 00:10:27.627 } 00:10:27.627 ], 00:10:27.627 "allow_any_host": true, 00:10:27.627 "hosts": [], 00:10:27.627 "serial_number": "SPDK00000000000001", 00:10:27.627 "model_number": "SPDK bdev Controller", 00:10:27.627 "max_namespaces": 32, 00:10:27.627 "min_cntlid": 1, 00:10:27.627 "max_cntlid": 65519, 00:10:27.627 "namespaces": [ 00:10:27.627 { 00:10:27.627 "nsid": 1, 00:10:27.627 "bdev_name": "Null1", 00:10:27.627 "name": "Null1", 00:10:27.627 "nguid": "2216B11598454BFB9A03561C2EC11FDF", 
00:10:27.627 "uuid": "2216b115-9845-4bfb-9a03-561c2ec11fdf" 00:10:27.627 } 00:10:27.627 ] 00:10:27.627 }, 00:10:27.627 { 00:10:27.627 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:27.627 "subtype": "NVMe", 00:10:27.627 "listen_addresses": [ 00:10:27.627 { 00:10:27.627 "trtype": "RDMA", 00:10:27.627 "adrfam": "IPv4", 00:10:27.627 "traddr": "10.0.0.2", 00:10:27.627 "trsvcid": "4420" 00:10:27.627 } 00:10:27.627 ], 00:10:27.627 "allow_any_host": true, 00:10:27.627 "hosts": [], 00:10:27.627 "serial_number": "SPDK00000000000002", 00:10:27.627 "model_number": "SPDK bdev Controller", 00:10:27.627 "max_namespaces": 32, 00:10:27.627 "min_cntlid": 1, 00:10:27.627 "max_cntlid": 65519, 00:10:27.627 "namespaces": [ 00:10:27.627 { 00:10:27.627 "nsid": 1, 00:10:27.627 "bdev_name": "Null2", 00:10:27.627 "name": "Null2", 00:10:27.627 "nguid": "E4224972597245848B0C0372A95521F6", 00:10:27.627 "uuid": "e4224972-5972-4584-8b0c-0372a95521f6" 00:10:27.627 } 00:10:27.627 ] 00:10:27.627 }, 00:10:27.627 { 00:10:27.627 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:27.627 "subtype": "NVMe", 00:10:27.627 "listen_addresses": [ 00:10:27.627 { 00:10:27.627 "trtype": "RDMA", 00:10:27.627 "adrfam": "IPv4", 00:10:27.627 "traddr": "10.0.0.2", 00:10:27.627 "trsvcid": "4420" 00:10:27.627 } 00:10:27.627 ], 00:10:27.627 "allow_any_host": true, 00:10:27.627 "hosts": [], 00:10:27.627 "serial_number": "SPDK00000000000003", 00:10:27.627 "model_number": "SPDK bdev Controller", 00:10:27.627 "max_namespaces": 32, 00:10:27.627 "min_cntlid": 1, 00:10:27.627 "max_cntlid": 65519, 00:10:27.627 "namespaces": [ 00:10:27.627 { 00:10:27.627 "nsid": 1, 00:10:27.627 "bdev_name": "Null3", 00:10:27.627 "name": "Null3", 00:10:27.627 "nguid": "070809A059CE4B2BB6A6804CF4B5849C", 00:10:27.627 "uuid": "070809a0-59ce-4b2b-b6a6-804cf4b5849c" 00:10:27.627 } 00:10:27.627 ] 00:10:27.627 }, 00:10:27.627 { 00:10:27.627 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:27.627 "subtype": "NVMe", 00:10:27.627 "listen_addresses": [ 00:10:27.627 { 
00:10:27.627 "trtype": "RDMA", 00:10:27.627 "adrfam": "IPv4", 00:10:27.627 "traddr": "10.0.0.2", 00:10:27.627 "trsvcid": "4420" 00:10:27.627 } 00:10:27.627 ], 00:10:27.627 "allow_any_host": true, 00:10:27.627 "hosts": [], 00:10:27.627 "serial_number": "SPDK00000000000004", 00:10:27.627 "model_number": "SPDK bdev Controller", 00:10:27.627 "max_namespaces": 32, 00:10:27.627 "min_cntlid": 1, 00:10:27.627 "max_cntlid": 65519, 00:10:27.627 "namespaces": [ 00:10:27.627 { 00:10:27.627 "nsid": 1, 00:10:27.627 "bdev_name": "Null4", 00:10:27.627 "name": "Null4", 00:10:27.627 "nguid": "8EBACBF884234BE9903B20B59E4F57DA", 00:10:27.627 "uuid": "8ebacbf8-8423-4be9-903b-20b59e4f57da" 00:10:27.627 } 00:10:27.627 ] 00:10:27.627 } 00:10:27.627 ] 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # seq 1 4 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null1 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 
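The `rpc_cmd` calls traced above (`bdev_null_create NullN 102400 512`, `nvmf_create_subsystem ... -a -s SPDKNNN`, `nvmf_subsystem_add_ns`, `nvmf_subsystem_add_listener -t rdma -a 10.0.0.2 -s 4420`) each become one JSON-RPC 2.0 request to the SPDK target. The sketch below builds one loop iteration's worth of payloads; the JSON parameter names (`total_size`, `allow_any_host`, `listen_address`, etc.) are assumptions inferred from the positional shell arguments, not verified against `rpc.py`'s actual translation.

```python
import json

def rpc_payload(method, params, req_id):
    """Wrap a method + params dict as a JSON-RPC 2.0 request, as rpc_cmd would."""
    return {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}

# One iteration of the setup loop above (i = 2 shown).
# Field names below are hypothetical mappings of the positional arguments.
i = 2
requests = [
    rpc_payload("bdev_null_create",
                {"name": f"Null{i}", "total_size": 102400, "block_size": 512}, 1),
    rpc_payload("nvmf_create_subsystem",
                {"nqn": f"nqn.2016-06.io.spdk:cnode{i}",
                 "allow_any_host": True,                 # the -a flag
                 "serial_number": f"SPDK{i:014d}"}, 2),  # the -s argument
    rpc_payload("nvmf_subsystem_add_ns",
                {"nqn": f"nqn.2016-06.io.spdk:cnode{i}",
                 "namespace": {"bdev_name": f"Null{i}"}}, 3),
    rpc_payload("nvmf_subsystem_add_listener",
                {"nqn": f"nqn.2016-06.io.spdk:cnode{i}",
                 "listen_address": {"trtype": "rdma", "traddr": "10.0.0.2",
                                    "trsvcid": "4420"}}, 4),
]

# One request per line, as they would go over the RPC socket.
wire = "\n".join(json.dumps(r) for r in requests)
print(wire)
```

The serial numbers in the `nvmf_get_subsystems` dump above (`SPDK00000000000002` and so on) match the `SPDK{i:014d}` pattern used here.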
00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null2 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null3 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.627 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null4 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 10.0.0.2 -s 4430 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.886 
11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_get_bdevs 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # jq -r '.[].name' 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # check_bdevs= 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@45 -- # '[' -n '' ']' 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@52 -- # nvmftestfini 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@99 -- # sync 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # set +e 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 
00:10:27.886 rmmod nvme_rdma 00:10:27.886 rmmod nvme_fabrics 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # set -e 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # return 0 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # '[' -n 1570855 ']' 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@337 -- # killprocess 1570855 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1570855 ']' 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1570855 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1570855 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1570855' 00:10:27.886 killing process with pid 1570855 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1570855 00:10:27.886 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1570855 
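The `killprocess` sequence traced above probes the nvmf target pid with `kill -0 1570855` before signalling it, then checks `uname` and `ps --no-headers -o comm=` to refuse killing a `sudo` process. A minimal sketch of just the liveness probe (the real helper does more, including the process-name check and the post-kill `wait`):

```python
import os

def pid_alive(pid: int) -> bool:
    """Equivalent of `kill -0 $pid`: signal 0 is never delivered, so this only
    asks the kernel whether the pid currently exists."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        # No such process.
        return False
    except PermissionError:
        # Process exists but belongs to another user; kill -0 fails the same way.
        return True
    return True

print(pid_alive(os.getpid()))  # prints True: our own pid always exists
```

Note that a terminated-but-unreaped child still answers `kill -0`, which is why the shell helper follows the kill with a `wait`.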
00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@264 -- # local dev 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@267 -- # remove_target_ns 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@268 -- # delete_main_bridge 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@130 -- # return 0 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:28.146 11:33:31 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@284 -- # iptr 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@542 -- # iptables-save 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@542 -- # iptables-restore 
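The `iptr` cleanup traced above pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, dropping only the test's own firewall rules while restoring everything else. The filtering step can be sketched as a pure line filter; the sample ruleset below is invented for illustration, not taken from this run.

```python
# Hypothetical sample of `iptables-save` output, for illustration only.
saved_rules = """\
*filter
:INPUT ACCEPT [0:0]
-A INPUT -i nvmf_br -j SPDK_NVMF
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT"""

# Equivalent of: iptables-save | grep -v SPDK_NVMF | iptables-restore
# (here we only model the middle filtering stage).
filtered = "\n".join(
    line for line in saved_rules.splitlines() if "SPDK_NVMF" not in line
)
print(filtered)
```

Because the filter is line-based, any rule merely referencing the SPDK chain is dropped along with the chain itself, which is the intended teardown behavior here.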
00:10:28.146 00:10:28.146 real 0m8.719s 00:10:28.146 user 0m9.102s 00:10:28.146 sys 0m5.437s 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:28.146 ************************************ 00:10:28.146 END TEST nvmf_target_discovery 00:10:28.146 ************************************ 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.146 11:33:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:28.405 ************************************ 00:10:28.405 START TEST nvmf_referrals 00:10:28.405 ************************************ 00:10:28.405 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:10:28.405 * Looking for test storage... 
00:10:28.405 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:28.405 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:28.405 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:28.405 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:28.405 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:28.405 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.405 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.405 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.405 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.405 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.405 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.405 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.405 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:28.406 
11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:28.406 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:28.406 --rc genhtml_branch_coverage=1 00:10:28.406 --rc genhtml_function_coverage=1 00:10:28.406 --rc genhtml_legend=1 00:10:28.406 --rc geninfo_all_blocks=1 00:10:28.406 --rc geninfo_unexecuted_blocks=1 00:10:28.406 00:10:28.406 ' 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:28.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.406 --rc genhtml_branch_coverage=1 00:10:28.406 --rc genhtml_function_coverage=1 00:10:28.406 --rc genhtml_legend=1 00:10:28.406 --rc geninfo_all_blocks=1 00:10:28.406 --rc geninfo_unexecuted_blocks=1 00:10:28.406 00:10:28.406 ' 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:28.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.406 --rc genhtml_branch_coverage=1 00:10:28.406 --rc genhtml_function_coverage=1 00:10:28.406 --rc genhtml_legend=1 00:10:28.406 --rc geninfo_all_blocks=1 00:10:28.406 --rc geninfo_unexecuted_blocks=1 00:10:28.406 00:10:28.406 ' 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:28.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.406 --rc genhtml_branch_coverage=1 00:10:28.406 --rc genhtml_function_coverage=1 00:10:28.406 --rc genhtml_legend=1 00:10:28.406 --rc geninfo_all_blocks=1 00:10:28.406 --rc geninfo_unexecuted_blocks=1 00:10:28.406 00:10:28.406 ' 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.406 11:33:31 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@50 -- # : 0 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:28.406 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:28.666 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@37 -- # nvmftestinit 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # remove_target_ns 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # xtrace_disable 00:10:28.666 11:33:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@131 -- # pci_devs=() 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@132 -- 
# local -a pci_net_devs 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@135 -- # net_devs=() 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@136 -- # e810=() 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@136 -- # local -ga e810 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@137 -- # x722=() 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@137 -- # local -ga x722 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@138 -- # mlx=() 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@138 -- # local -ga mlx 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:35.232 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:10:35.232 11:33:37 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:35.232 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:10:35.232 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals 
-- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:35.233 Found net devices under 0000:18:00.0: mlx_0_0 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:35.233 Found net devices under 0000:18:00.1: mlx_0_1 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # 
get_rdma_if_list 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@75 -- # rdma_devs=() 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@89 -- # continue 2 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:35.233 
11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@89 -- # continue 2 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # is_hw=yes 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@61 -- # uname 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_cm 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_core 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_umad 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe iw_cm 00:10:35.233 11:33:37 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@28 -- # local -g _dev 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # ips=() 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@53 
-- # [[ rdma == rdma ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@58 -- # key_initiator=target1 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772161 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:10:35.233 
11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:10:35.233 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:10:35.234 10.0.0.1 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772162 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:10:35.234 10.0.0.2 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/setup.sh@215 -- # [[ -n '' ]] 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@38 -- # ping_ips 1 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:10:35.234 11:33:37 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=target0 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 
10.0.0.2 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:10:35.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:35.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.028 ms 00:10:35.234 00:10:35.234 --- 10.0.0.2 ping statistics --- 00:10:35.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.234 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=target0 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:35.234 
11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:10:35.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:35.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.022 ms 00:10:35.234 00:10:35.234 --- 10.0.0.2 ping statistics --- 00:10:35.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.234 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair++ )) 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # return 0 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:35.234 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=target0 00:10:35.235 11:33:37 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev target1 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- 
# local dev=target1 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=target0 00:10:35.235 11:33:37 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:35.235 11:33:37 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev target1 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=target1 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n target1 
]] 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:35.235 11:33:38 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # nvmfpid=1574021 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # waitforlisten 1574021 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1574021 ']' 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:35.235 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:35.235 [2024-11-20 11:33:38.092417] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:10:35.235 [2024-11-20 11:33:38.092470] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.235 [2024-11-20 11:33:38.167874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.235 [2024-11-20 11:33:38.217053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.235 [2024-11-20 11:33:38.217096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:35.235 [2024-11-20 11:33:38.217105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.235 [2024-11-20 11:33:38.217114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.236 [2024-11-20 11:33:38.217121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:35.236 [2024-11-20 11:33:38.218572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.236 [2024-11-20 11:33:38.218664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.236 [2024-11-20 11:33:38.218756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.236 [2024-11-20 11:33:38.218757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:35.236 [2024-11-20 11:33:38.400178] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22eb220/0x22ef710) succeed. 00:10:35.236 [2024-11-20 11:33:38.409633] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22ec8b0/0x2330db0) succeed. 
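The setup trace above resolves every target/initiator address the same way: `get_ip_address` maps a logical device (target0, target1) to a netdev (mlx_0_1, mlx_0_0) and then reads `/sys/class/net/<dev>/ifalias`. A minimal sketch of that lookup, run against a mock sysfs tree so it works without Mellanox hardware (the mock path and the function name `get_ip_address_sketch` are illustrative, not part of the suite):

```shell
# Sketch of the ifalias-based lookup seen in nvmf/setup.sh@165-175.
# $2 defaults to the real sysfs root; a temp dir stands in for it here.
get_ip_address_sketch() {
    local dev=$1 sysfs=${2:-/sys/class/net} ip
    ip=$(cat "$sysfs/$dev/ifalias" 2>/dev/null)
    [[ -n $ip ]] && echo "$ip"
}

# Mock tree mirroring the log: mlx_0_1 carries 10.0.0.2 in its ifalias.
mock=$(mktemp -d)
mkdir -p "$mock/mlx_0_1"
echo 10.0.0.2 > "$mock/mlx_0_1/ifalias"
result=$(get_ip_address_sketch mlx_0_1 "$mock")
echo "$result"   # 10.0.0.2
rm -rf "$mock"
```

This is why the trace shows `cat /sys/class/net/mlx_0_1/ifalias` before every `ip=10.0.0.2` assignment.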
00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 10.0.0.2 -s 8009 discovery 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:35.236 [2024-11-20 11:33:38.551427] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 8009 *** 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:10:35.236 11:33:38 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.236 11:33:38 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 10.0.0.2 -s 8009 -o json 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:35.236 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 
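The `get_referral_ips` checks above collect traddrs from either the RPC (`nvmf_discovery_get_referrals` piped through jq) or `nvme discover`, sort them, and compare the joined list against the expected string. A sketch of that compare pattern with the addresses from this log hard-coded (the jq/nvme plumbing is elided):

```shell
# Referral traddrs from any source are sorted, space-joined, then matched
# against the expected list, exactly as referrals.sh@49/@50 do above.
got=$(printf '%s\n' 127.0.0.4 127.0.0.2 127.0.0.3 | sort | xargs)
expected="127.0.0.2 127.0.0.3 127.0.0.4"
[[ $got == "$expected" ]] && echo match   # match
```

Sorting both sides first is what lets the test tolerate the RPC and `nvme discover` returning referrals in different orders.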
00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 10.0.0.2 -s 8009 -o json 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.495 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:35.753 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.753 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:35.753 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.753 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:35.753 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.753 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@65 -- # get_referral_ips rpc 00:10:35.753 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:35.753 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:35.753 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.753 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:35.753 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:35.753 11:33:38 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:35.753 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.753 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:35.753 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:35.753 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:35.753 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:35.753 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:35.753 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:35.753 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 10.0.0.2 -s 8009 -o json 00:10:35.753 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:10:35.753 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:35.753 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:35.753 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:35.753 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:35.753 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:35.753 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 10.0.0.2 -s 8009 -o json 00:10:35.753 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 10.0.0.2 -s 8009 -o json 
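The subnqn checks in this stretch distinguish the two referral kinds: a referral added with `-n discovery` reports the well-known discovery NQN, while one added with `-n nqn.2016-06.io.spdk:cnode1` reports that subsystem NQN. A small sketch of the classification (the helper name `classify_referral` is illustrative):

```shell
# Classify a referral entry by its subnqn, mirroring the string
# comparisons in referrals.sh@67/@68 above.
classify_referral() {
    case $1 in
        nqn.2014-08.org.nvmexpress.discovery)
            echo "discovery subsystem referral" ;;
        *)  echo "nvme subsystem referral" ;;
    esac
}

classify_referral nqn.2014-08.org.nvmexpress.discovery
classify_referral nqn.2016-06.io.spdk:cnode1
```

Both kinds show up in the discovery log page, which is why the test filters entries by `.subtype` before comparing `.subnqn`.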
00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:36.010 11:33:39 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 10.0.0.2 -s 8009 -o json 00:10:36.010 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:36.267 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:36.267 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:36.267 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:36.267 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:36.267 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:36.267 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 10.0.0.2 -s 8009 -o json 00:10:36.267 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq 
'.records[] | select(.subtype == "nvme subsystem")' 00:10:36.267 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:36.267 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:36.267 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:36.267 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:36.267 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 10.0.0.2 -s 8009 -o json 00:10:36.267 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 
00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 10.0.0.2 -s 8009 -o json 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@99 -- # sync 
00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # set +e 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:10:36.526 rmmod nvme_rdma 00:10:36.526 rmmod nvme_fabrics 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # set -e 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # return 0 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # '[' -n 1574021 ']' 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@337 -- # killprocess 1574021 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1574021 ']' 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1574021 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.526 11:33:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1574021 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1574021' 00:10:36.799 killing process with pid 1574021 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1574021 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1574021 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # nvmf_fini 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@264 -- # local dev 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@267 -- # remove_target_ns 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@268 -- # delete_main_bridge 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@130 -- # return 0 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:36.799 11:33:40 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:10:36.799 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # _dev=0 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # dev_map=() 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@284 -- # iptr 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:10:37.144 
11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@542 -- # iptables-save 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@542 -- # iptables-restore 00:10:37.144 00:10:37.144 real 0m8.625s 00:10:37.144 user 0m10.632s 00:10:37.144 sys 0m5.560s 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.144 ************************************ 00:10:37.144 END TEST nvmf_referrals 00:10:37.144 ************************************ 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:37.144 ************************************ 00:10:37.144 START TEST nvmf_connect_disconnect 00:10:37.144 ************************************ 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:10:37.144 * Looking for test storage... 
00:10:37.144 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
scripts/common.sh@344 -- # case "$op" in 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:37.144 11:33:40 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.144 --rc genhtml_branch_coverage=1 00:10:37.144 --rc genhtml_function_coverage=1 00:10:37.144 --rc genhtml_legend=1 00:10:37.144 --rc geninfo_all_blocks=1 00:10:37.144 --rc geninfo_unexecuted_blocks=1 00:10:37.144 00:10:37.144 ' 00:10:37.144 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.144 --rc genhtml_branch_coverage=1 00:10:37.145 --rc genhtml_function_coverage=1 00:10:37.145 --rc genhtml_legend=1 00:10:37.145 --rc geninfo_all_blocks=1 00:10:37.145 --rc geninfo_unexecuted_blocks=1 00:10:37.145 00:10:37.145 ' 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.145 --rc genhtml_branch_coverage=1 00:10:37.145 --rc genhtml_function_coverage=1 00:10:37.145 --rc genhtml_legend=1 00:10:37.145 --rc geninfo_all_blocks=1 00:10:37.145 --rc geninfo_unexecuted_blocks=1 00:10:37.145 00:10:37.145 ' 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.145 --rc genhtml_branch_coverage=1 00:10:37.145 --rc genhtml_function_coverage=1 00:10:37.145 --rc genhtml_legend=1 00:10:37.145 --rc geninfo_all_blocks=1 00:10:37.145 --rc geninfo_unexecuted_blocks=1 00:10:37.145 00:10:37.145 ' 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:37.145 11:33:40 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:37.145 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@50 -- # : 0 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 
00:10:37.404 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # [[ 
phy != virt ]] 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # xtrace_disable 00:10:37.404 11:33:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@131 -- # pci_devs=() 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@135 -- # net_devs=() 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@136 -- # e810=() 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@136 -- # local -ga e810 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@137 -- # x722=() 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@137 -- # local -ga x722 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@138 -- # mlx=() 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@138 -- # local -ga mlx 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # 
pci_devs+=("${e810[@]}") 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:43.972 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:43.972 11:33:46 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:43.972 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:10:43.972 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:43.973 
Found net devices under 0000:18:00.0: mlx_0_0 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:43.973 Found net devices under 0000:18:00.1: mlx_0_1 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # get_rdma_if_list 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # rdma_devs=() 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
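The trace above derives bare interface names (mlx_0_0, mlx_0_1) by globbing each PCI device's `net/` directory and stripping the leading path with the `${var##*/}` expansion. A minimal standalone sketch of that suffix extraction — the PCI address and interface name are taken from the log, but the path is constructed locally here rather than read from sysfs:

```shell
# Sketch of the log's net-device name extraction: take the sysfs path
# of a net device under a PCI function and keep only the final path
# component. The path is a local stand-in; on a real host it comes
# from globbing /sys/bus/pci/devices/$pci/net/*.
path="/sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0"
ifname="${path##*/}"   # strip everything up to and including the last '/'
echo "$ifname"
```

The same expansion applied across the whole array (`"${pci_net_devs[@]##*/}"`) is what turns the globbed sysfs paths into the `net_devs` list the script iterates over.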
nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@89 -- # continue 2 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@89 -- # continue 2 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # (( 2 > 0 )) 
00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # is_hw=yes 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@61 -- # uname 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_cm 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_core 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_umad 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe iw_cm 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- 
# modprobe rdma_ucm 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@28 -- # local -g _dev 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:10:43.973 
11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@58 -- # key_initiator=target1 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772161 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:43.973 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:10:43.974 11:33:46 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:10:43.974 10.0.0.1 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:10:43.974 10.0.0.2 00:10:43.974 11:33:46 
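In the trace above, `setup_interface_pair` hands `set_ip` a 32-bit integer (167772161 is 0x0a000001) and `val_to_ip` renders it as a dotted quad via `printf`. A minimal re-implementation of that conversion, assuming the straightforward byte split the log's output implies:

```shell
# Sketch of val_to_ip: split a 32-bit value into four bytes and print
# them as a dotted-quad IPv4 address (167772161 == 0x0a000001 -> 10.0.0.1).
val_to_ip() {
  val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

This also explains the `ips=("$ip" $((++ip)))` step earlier in the trace: the initiator and target of a pair get consecutive values from the 0x0a000001 pool, which is why mlx_0_0 receives 10.0.0.1 and mlx_0_1 receives 10.0.0.2.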
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:43.974 11:33:46 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@38 -- # ping_ips 1 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # 
dev=mlx_0_1 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:10:43.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:43.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.029 ms 00:10:43.974 00:10:43.974 --- 10.0.0.2 ping statistics --- 00:10:43.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.974 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 
00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:10:43.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:43.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.024 ms 00:10:43.974 00:10:43.974 --- 10.0.0.2 ping statistics --- 00:10:43.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.974 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair++ )) 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # return 0 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 
00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:43.974 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:43.975 11:33:46 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=target1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:10:43.975 11:33:46 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=target1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 
00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # nvmfpid=1577300 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # waitforlisten 1577300 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1577300 ']' 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.975 11:33:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:43.975 [2024-11-20 11:33:46.813495] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:10:43.975 [2024-11-20 11:33:46.813548] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.975 [2024-11-20 11:33:46.891517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.975 [2024-11-20 11:33:46.939867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:43.975 [2024-11-20 11:33:46.939905] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.975 [2024-11-20 11:33:46.939914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.975 [2024-11-20 11:33:46.939923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.975 [2024-11-20 11:33:46.939930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.975 [2024-11-20 11:33:46.941316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.975 [2024-11-20 11:33:46.941338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.976 [2024-11-20 11:33:46.941360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.976 [2024-11-20 11:33:46.941362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:10:43.976 11:33:47 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:43.976 [2024-11-20 11:33:47.094945] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:10:43.976 [2024-11-20 11:33:47.115147] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c5e220/0x1c62710) succeed. 00:10:43.976 [2024-11-20 11:33:47.124237] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c5f8b0/0x1ca3db0) succeed. 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:43.976 [2024-11-20 11:33:47.276333] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:43.976 11:33:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:48.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@99 -- # sync 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # set +e 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:11:04.007 rmmod nvme_rdma 00:11:04.007 rmmod nvme_fabrics 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # set -e 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # return 0 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # '[' -n 1577300 ']' 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@337 -- # killprocess 1577300 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1577300 ']' 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 
1577300 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1577300 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1577300' 00:11:04.007 killing process with pid 1577300 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1577300 00:11:04.007 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1577300 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@264 -- # local dev 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@267 -- # remove_target_ns 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # 
_remove_target_ns 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@268 -- # delete_main_bridge 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@130 -- # return 0 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 
in_ns= 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # _dev=0 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # dev_map=() 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@284 -- # iptr 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@542 -- # iptables-save 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@542 -- # iptables-restore 00:11:04.268 00:11:04.268 real 0m27.258s 00:11:04.268 user 1m23.932s 00:11:04.268 sys 0m5.784s 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.268 ************************************ 00:11:04.268 END TEST nvmf_connect_disconnect 00:11:04.268 ************************************ 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.268 11:34:07 
nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:04.268 ************************************ 00:11:04.268 START TEST nvmf_multitarget 00:11:04.268 ************************************ 00:11:04.268 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:11:04.529 * Looking for test storage... 00:11:04.529 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
scripts/common.sh@338 -- # local 'op=<' 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:04.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.529 --rc genhtml_branch_coverage=1 00:11:04.529 --rc genhtml_function_coverage=1 00:11:04.529 --rc genhtml_legend=1 00:11:04.529 --rc geninfo_all_blocks=1 00:11:04.529 --rc geninfo_unexecuted_blocks=1 00:11:04.529 00:11:04.529 ' 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:04.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.529 --rc genhtml_branch_coverage=1 00:11:04.529 --rc genhtml_function_coverage=1 00:11:04.529 --rc genhtml_legend=1 00:11:04.529 --rc geninfo_all_blocks=1 00:11:04.529 --rc geninfo_unexecuted_blocks=1 00:11:04.529 00:11:04.529 ' 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:04.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.529 --rc genhtml_branch_coverage=1 00:11:04.529 --rc genhtml_function_coverage=1 00:11:04.529 --rc genhtml_legend=1 00:11:04.529 --rc geninfo_all_blocks=1 00:11:04.529 --rc geninfo_unexecuted_blocks=1 00:11:04.529 00:11:04.529 ' 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:04.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.529 --rc genhtml_branch_coverage=1 00:11:04.529 --rc 
genhtml_function_coverage=1 00:11:04.529 --rc genhtml_legend=1 00:11:04.529 --rc geninfo_all_blocks=1 00:11:04.529 --rc geninfo_unexecuted_blocks=1 00:11:04.529 00:11:04.529 ' 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:04.529 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:04.530 11:34:07 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@50 -- # : 0 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:04.530 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: 
line 31: [: : integer expression expected 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # remove_target_ns 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # xtrace_disable 
00:11:04.530 11:34:07 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@131 -- # pci_devs=() 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@135 -- # net_devs=() 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@136 -- # e810=() 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@136 -- # local -ga e810 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@137 -- # x722=() 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@137 -- # local -ga x722 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@138 -- # mlx=() 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@138 -- # local -ga mlx 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.096 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 
]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:11.097 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:11.097 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:11.097 11:34:13 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:11.097 Found net devices under 0000:18:00.0: mlx_0_0 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 1 == 0 )) 
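The per-PCI discovery traced just above (common.sh@227 globs the device's `net/` sysfs directory, common.sh@243 strips the path prefix) can be sketched as a standalone snippet. The PCI addresses are the ones appearing in this log; the `list_pci_net_devs` helper name and the empty-glob guard are illustrative assumptions, not part of the harness:

```shell
#!/usr/bin/env bash
# Sketch of the net-device discovery step from nvmf/common.sh:
# for a PCI function, list the kernel net interfaces bound to it.
list_pci_net_devs() {            # helper name is hypothetical
    local pci=$1
    # Glob the net/ subdirectory of the PCI device node (common.sh@227)
    local devs=("/sys/bus/pci/devices/$pci/net/"*)
    # Without nullglob, an empty match leaves the literal pattern; skip it.
    [[ -e ${devs[0]} ]] || return 0
    # ${var##*/} drops everything up to the last slash (common.sh@243),
    # turning /sys/.../net/mlx_0_0 into mlx_0_0.
    printf '%s\n' "${devs[@]##*/}"
}

# Addresses seen in this log run
for pci in 0000:18:00.0 0000:18:00.1; do
    echo "Found net devices under $pci: $(list_pci_net_devs "$pci" | tr '\n' ' ')"
done
```

On the CI host this prints `mlx_0_0` and `mlx_0_1` as in the log; on a machine without those devices the interface list is simply empty.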
00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:11.097 Found net devices under 0000:18:00.1: mlx_0_1 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # get_rdma_if_list 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@75 -- # rdma_devs=() 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@89 -- # continue 2 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@89 -- # continue 2 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # is_hw=yes 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@61 -- # uname 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_cm 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_core 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_umad 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe iw_cm 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@28 -- # local -g _dev 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:11.097 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@34 -- # 
setup_interface_pair 0 phy 167772161 rdma 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # ips=() 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@58 -- # key_initiator=target1 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772161 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:11:11.098 10.0.0.1 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772162 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:11:11.098 11:34:13 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:11:11.098 10.0.0.2 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@79 -- # [[ 
phy == veth ]] 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=target0 00:11:11.098 11:34:13 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:11:11.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:11.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 00:11:11.098 00:11:11.098 --- 10.0.0.2 ping statistics --- 00:11:11.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.098 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:11.098 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=target0 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 
]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:11:11.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.023 ms 00:11:11.099 00:11:11.099 --- 10.0.0.2 ping statistics --- 00:11:11.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.099 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair++ )) 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # return 0 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=target0 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@335 -- # 
get_rdma_initiator_ip_address 1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev target1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=target1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:11:11.099 
11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=target0 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.099 11:34:13 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev target1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=target1 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:11:11.099 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # nvmfpid=1582878 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # waitforlisten 1582878 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1582878 ']' 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.100 11:34:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:11.100 [2024-11-20 11:34:13.829621] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:11:11.100 [2024-11-20 11:34:13.829690] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.100 [2024-11-20 11:34:13.910221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.100 [2024-11-20 11:34:13.958711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.100 [2024-11-20 11:34:13.958759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.100 [2024-11-20 11:34:13.958770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.100 [2024-11-20 11:34:13.958779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.100 [2024-11-20 11:34:13.958786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:11.100 [2024-11-20 11:34:13.960180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.100 [2024-11-20 11:34:13.960267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.100 [2024-11-20 11:34:13.960366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.100 [2024-11-20 11:34:13.960369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.100 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.100 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:11.100 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:11.100 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:11.100 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:11.100 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.100 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:11.100 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:11.100 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:11.100 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:11.100 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:11:11.100 "nvmf_tgt_1" 00:11:11.100 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:11.100 "nvmf_tgt_2" 00:11:11.100 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:11.100 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:11.100 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:11.100 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:11.359 true 00:11:11.359 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:11.359 true 00:11:11.359 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:11.359 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:11.618 11:34:14 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@99 -- # sync 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # set +e 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:11:11.618 rmmod nvme_rdma 00:11:11.618 rmmod nvme_fabrics 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # set -e 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # return 0 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # '[' -n 1582878 ']' 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@337 -- # killprocess 1582878 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1582878 ']' 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1582878 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1582878 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1582878' 00:11:11.618 killing process with pid 1582878 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1582878 00:11:11.618 11:34:14 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1582878 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # nvmf_fini 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@264 -- # local dev 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@267 -- # remove_target_ns 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@268 -- # delete_main_bridge 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@130 -- # return 0 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:11:11.878 
11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # _dev=0 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # dev_map=() 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@284 -- # iptr 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@542 -- # iptables-save 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@542 -- # iptables-restore 00:11:11.878 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:11:11.878 00:11:11.878 real 0m7.480s 00:11:11.878 user 0m7.245s 00:11:11.878 sys 0m4.955s 00:11:11.879 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.879 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:11.879 ************************************ 00:11:11.879 END TEST nvmf_multitarget 00:11:11.879 ************************************ 00:11:11.879 11:34:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:11:11.879 11:34:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:11.879 11:34:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.879 11:34:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:11.879 ************************************ 00:11:11.879 START TEST nvmf_rpc 00:11:11.879 ************************************ 00:11:11.879 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:11:12.139 * Looking for test storage... 
00:11:12.139 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.139 
11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:12.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.139 --rc genhtml_branch_coverage=1 00:11:12.139 --rc genhtml_function_coverage=1 00:11:12.139 --rc genhtml_legend=1 00:11:12.139 --rc geninfo_all_blocks=1 00:11:12.139 --rc 
geninfo_unexecuted_blocks=1 00:11:12.139 00:11:12.139 ' 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:12.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.139 --rc genhtml_branch_coverage=1 00:11:12.139 --rc genhtml_function_coverage=1 00:11:12.139 --rc genhtml_legend=1 00:11:12.139 --rc geninfo_all_blocks=1 00:11:12.139 --rc geninfo_unexecuted_blocks=1 00:11:12.139 00:11:12.139 ' 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:12.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.139 --rc genhtml_branch_coverage=1 00:11:12.139 --rc genhtml_function_coverage=1 00:11:12.139 --rc genhtml_legend=1 00:11:12.139 --rc geninfo_all_blocks=1 00:11:12.139 --rc geninfo_unexecuted_blocks=1 00:11:12.139 00:11:12.139 ' 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:12.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.139 --rc genhtml_branch_coverage=1 00:11:12.139 --rc genhtml_function_coverage=1 00:11:12.139 --rc genhtml_legend=1 00:11:12.139 --rc geninfo_all_blocks=1 00:11:12.139 --rc geninfo_unexecuted_blocks=1 00:11:12.139 00:11:12.139 ' 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.139 
11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.139 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
paths/export.sh@5 -- # export PATH 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@50 -- # : 0 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:12.140 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # remove_target_ns 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # xtrace_disable 00:11:12.140 11:34:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@131 -- # pci_devs=() 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@135 -- # net_devs=() 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@136 -- # e810=() 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@136 -- # local -ga e810 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@137 -- # x722=() 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@137 -- # local -ga x722 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@138 -- # mlx=() 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@138 -- # local -ga mlx 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # echo 
'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:18.708 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:18.708 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:11:18.708 11:34:21 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:18.708 Found net devices under 0000:18:00.0: mlx_0_0 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:18.708 Found net devices under 0000:18:00.1: mlx_0_1 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:11:18.708 11:34:21 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # get_rdma_if_list 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@75 -- # rdma_devs=() 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:11:18.708 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@89 -- # continue 2 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:18.709 11:34:21 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@89 -- # continue 2 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # is_hw=yes 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@61 -- # uname 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_cm 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_core 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_umad 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe iw_cm 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:11:18.709 11:34:21 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@28 -- # local -g _dev 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # ips=() 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@58 -- # key_initiator=target1 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772161 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:11:18.709 10.0.0.1 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772162 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:11:18.709 10.0.0.2 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:11:18.709 11:34:21 
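
As an aside on the trace above: the `val_to_ip` helper converts the integer IP-pool value (167772161, i.e. 0x0A000001) into dotted-quad form before it is passed to `ip addr add`. The trace only shows the final `printf '%u.%u.%u.%u\n' 10 0 0 1`, so the byte-shifting below is an assumption about how the helper derives those four octets — a minimal stand-alone sketch, not SPDK's actual implementation:

```shell
#!/usr/bin/env bash
# Sketch of a val_to_ip-style helper (assumed internals; the trace shows
# only the resulting printf call): split a 32-bit integer into four octets.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

Incrementing the pool value by one per device then yields consecutive addresses (10.0.0.1 for mlx_0_0, 10.0.0.2 for mlx_0_1), matching the `ips=("$ip" $((++ip)))` step seen earlier in the trace.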
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@179 -- # 
get_ip_address target0 '' 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=target0 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:11:18.709 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:11:18.710 
11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:11:18.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:11:18.710 00:11:18.710 --- 10.0.0.2 ping statistics --- 00:11:18.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.710 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=target0 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:11:18.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.018 ms 00:11:18.710 00:11:18.710 --- 10.0.0.2 ping statistics --- 00:11:18.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.710 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair++ )) 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # return 0 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:11:18.710 11:34:21 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=target0 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # 
get_target_ip_address 1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev target1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=target1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:11:18.710 11:34:21 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=target0 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n '' 
]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev target1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=target1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:11:18.710 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:11:18.711 11:34:21 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # nvmfpid=1586018 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # waitforlisten 1586018 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1586018 ']' 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:18.711 [2024-11-20 11:34:21.463961] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:11:18.711 [2024-11-20 11:34:21.464015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.711 [2024-11-20 11:34:21.540324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.711 [2024-11-20 11:34:21.588082] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.711 [2024-11-20 11:34:21.588125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.711 [2024-11-20 11:34:21.588134] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.711 [2024-11-20 11:34:21.588146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.711 [2024-11-20 11:34:21.588153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:18.711 [2024-11-20 11:34:21.589605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.711 [2024-11-20 11:34:21.589691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.711 [2024-11-20 11:34:21.589772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.711 [2024-11-20 11:34:21.589774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:18.711 "tick_rate": 2300000000, 00:11:18.711 "poll_groups": [ 00:11:18.711 { 00:11:18.711 "name": "nvmf_tgt_poll_group_000", 00:11:18.711 "admin_qpairs": 0, 00:11:18.711 "io_qpairs": 0, 00:11:18.711 "current_admin_qpairs": 0, 00:11:18.711 "current_io_qpairs": 0, 00:11:18.711 "pending_bdev_io": 0, 00:11:18.711 "completed_nvme_io": 0, 
00:11:18.711 "transports": [] 00:11:18.711 }, 00:11:18.711 { 00:11:18.711 "name": "nvmf_tgt_poll_group_001", 00:11:18.711 "admin_qpairs": 0, 00:11:18.711 "io_qpairs": 0, 00:11:18.711 "current_admin_qpairs": 0, 00:11:18.711 "current_io_qpairs": 0, 00:11:18.711 "pending_bdev_io": 0, 00:11:18.711 "completed_nvme_io": 0, 00:11:18.711 "transports": [] 00:11:18.711 }, 00:11:18.711 { 00:11:18.711 "name": "nvmf_tgt_poll_group_002", 00:11:18.711 "admin_qpairs": 0, 00:11:18.711 "io_qpairs": 0, 00:11:18.711 "current_admin_qpairs": 0, 00:11:18.711 "current_io_qpairs": 0, 00:11:18.711 "pending_bdev_io": 0, 00:11:18.711 "completed_nvme_io": 0, 00:11:18.711 "transports": [] 00:11:18.711 }, 00:11:18.711 { 00:11:18.711 "name": "nvmf_tgt_poll_group_003", 00:11:18.711 "admin_qpairs": 0, 00:11:18.711 "io_qpairs": 0, 00:11:18.711 "current_admin_qpairs": 0, 00:11:18.711 "current_io_qpairs": 0, 00:11:18.711 "pending_bdev_io": 0, 00:11:18.711 "completed_nvme_io": 0, 00:11:18.711 "transports": [] 00:11:18.711 } 00:11:18.711 ] 00:11:18.711 }' 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.711 11:34:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.711 [2024-11-20 11:34:21.887571] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1721280/0x1725770) succeed. 00:11:18.711 [2024-11-20 11:34:21.896664] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1722910/0x1766e10) succeed. 00:11:18.711 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.711 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:18.711 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.711 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.711 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.711 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:18.711 "tick_rate": 2300000000, 00:11:18.711 "poll_groups": [ 00:11:18.711 { 00:11:18.711 "name": "nvmf_tgt_poll_group_000", 00:11:18.711 "admin_qpairs": 0, 00:11:18.711 "io_qpairs": 0, 00:11:18.711 "current_admin_qpairs": 0, 00:11:18.711 "current_io_qpairs": 0, 00:11:18.711 "pending_bdev_io": 0, 00:11:18.711 "completed_nvme_io": 0, 00:11:18.711 "transports": [ 00:11:18.711 { 00:11:18.711 "trtype": "RDMA", 00:11:18.711 "pending_data_buffer": 0, 00:11:18.711 "devices": [ 00:11:18.711 { 00:11:18.711 "name": "mlx5_0", 00:11:18.711 "polls": 15986, 00:11:18.711 "idle_polls": 15986, 00:11:18.711 "completions": 0, 00:11:18.711 "requests": 0, 00:11:18.711 "request_latency": 0, 00:11:18.711 "pending_free_request": 0, 00:11:18.711 "pending_rdma_read": 0, 00:11:18.711 "pending_rdma_write": 0, 00:11:18.711 "pending_rdma_send": 0, 00:11:18.711 "total_send_wrs": 0, 00:11:18.711 "send_doorbell_updates": 0, 00:11:18.711 
"total_recv_wrs": 4096, 00:11:18.711 "recv_doorbell_updates": 1 00:11:18.711 }, 00:11:18.711 { 00:11:18.711 "name": "mlx5_1", 00:11:18.711 "polls": 15986, 00:11:18.711 "idle_polls": 15986, 00:11:18.711 "completions": 0, 00:11:18.711 "requests": 0, 00:11:18.711 "request_latency": 0, 00:11:18.711 "pending_free_request": 0, 00:11:18.711 "pending_rdma_read": 0, 00:11:18.711 "pending_rdma_write": 0, 00:11:18.711 "pending_rdma_send": 0, 00:11:18.711 "total_send_wrs": 0, 00:11:18.711 "send_doorbell_updates": 0, 00:11:18.711 "total_recv_wrs": 4096, 00:11:18.711 "recv_doorbell_updates": 1 00:11:18.711 } 00:11:18.711 ] 00:11:18.711 } 00:11:18.711 ] 00:11:18.711 }, 00:11:18.711 { 00:11:18.711 "name": "nvmf_tgt_poll_group_001", 00:11:18.711 "admin_qpairs": 0, 00:11:18.711 "io_qpairs": 0, 00:11:18.711 "current_admin_qpairs": 0, 00:11:18.711 "current_io_qpairs": 0, 00:11:18.711 "pending_bdev_io": 0, 00:11:18.711 "completed_nvme_io": 0, 00:11:18.711 "transports": [ 00:11:18.711 { 00:11:18.711 "trtype": "RDMA", 00:11:18.711 "pending_data_buffer": 0, 00:11:18.711 "devices": [ 00:11:18.711 { 00:11:18.711 "name": "mlx5_0", 00:11:18.711 "polls": 10536, 00:11:18.711 "idle_polls": 10536, 00:11:18.711 "completions": 0, 00:11:18.711 "requests": 0, 00:11:18.711 "request_latency": 0, 00:11:18.711 "pending_free_request": 0, 00:11:18.711 "pending_rdma_read": 0, 00:11:18.711 "pending_rdma_write": 0, 00:11:18.711 "pending_rdma_send": 0, 00:11:18.711 "total_send_wrs": 0, 00:11:18.711 "send_doorbell_updates": 0, 00:11:18.711 "total_recv_wrs": 4096, 00:11:18.711 "recv_doorbell_updates": 1 00:11:18.711 }, 00:11:18.711 { 00:11:18.711 "name": "mlx5_1", 00:11:18.711 "polls": 10536, 00:11:18.711 "idle_polls": 10536, 00:11:18.711 "completions": 0, 00:11:18.711 "requests": 0, 00:11:18.711 "request_latency": 0, 00:11:18.711 "pending_free_request": 0, 00:11:18.711 "pending_rdma_read": 0, 00:11:18.711 "pending_rdma_write": 0, 00:11:18.711 "pending_rdma_send": 0, 00:11:18.711 "total_send_wrs": 0, 
00:11:18.712 "send_doorbell_updates": 0, 00:11:18.712 "total_recv_wrs": 4096, 00:11:18.712 "recv_doorbell_updates": 1 00:11:18.712 } 00:11:18.712 ] 00:11:18.712 } 00:11:18.712 ] 00:11:18.712 }, 00:11:18.712 { 00:11:18.712 "name": "nvmf_tgt_poll_group_002", 00:11:18.712 "admin_qpairs": 0, 00:11:18.712 "io_qpairs": 0, 00:11:18.712 "current_admin_qpairs": 0, 00:11:18.712 "current_io_qpairs": 0, 00:11:18.712 "pending_bdev_io": 0, 00:11:18.712 "completed_nvme_io": 0, 00:11:18.712 "transports": [ 00:11:18.712 { 00:11:18.712 "trtype": "RDMA", 00:11:18.712 "pending_data_buffer": 0, 00:11:18.712 "devices": [ 00:11:18.712 { 00:11:18.712 "name": "mlx5_0", 00:11:18.712 "polls": 5549, 00:11:18.712 "idle_polls": 5549, 00:11:18.712 "completions": 0, 00:11:18.712 "requests": 0, 00:11:18.712 "request_latency": 0, 00:11:18.712 "pending_free_request": 0, 00:11:18.712 "pending_rdma_read": 0, 00:11:18.712 "pending_rdma_write": 0, 00:11:18.712 "pending_rdma_send": 0, 00:11:18.712 "total_send_wrs": 0, 00:11:18.712 "send_doorbell_updates": 0, 00:11:18.712 "total_recv_wrs": 4096, 00:11:18.712 "recv_doorbell_updates": 1 00:11:18.712 }, 00:11:18.712 { 00:11:18.712 "name": "mlx5_1", 00:11:18.712 "polls": 5549, 00:11:18.712 "idle_polls": 5549, 00:11:18.712 "completions": 0, 00:11:18.712 "requests": 0, 00:11:18.712 "request_latency": 0, 00:11:18.712 "pending_free_request": 0, 00:11:18.712 "pending_rdma_read": 0, 00:11:18.712 "pending_rdma_write": 0, 00:11:18.712 "pending_rdma_send": 0, 00:11:18.712 "total_send_wrs": 0, 00:11:18.712 "send_doorbell_updates": 0, 00:11:18.712 "total_recv_wrs": 4096, 00:11:18.712 "recv_doorbell_updates": 1 00:11:18.712 } 00:11:18.712 ] 00:11:18.712 } 00:11:18.712 ] 00:11:18.712 }, 00:11:18.712 { 00:11:18.712 "name": "nvmf_tgt_poll_group_003", 00:11:18.712 "admin_qpairs": 0, 00:11:18.712 "io_qpairs": 0, 00:11:18.712 "current_admin_qpairs": 0, 00:11:18.712 "current_io_qpairs": 0, 00:11:18.712 "pending_bdev_io": 0, 00:11:18.712 "completed_nvme_io": 0, 00:11:18.712 
"transports": [ 00:11:18.712 { 00:11:18.712 "trtype": "RDMA", 00:11:18.712 "pending_data_buffer": 0, 00:11:18.712 "devices": [ 00:11:18.712 { 00:11:18.712 "name": "mlx5_0", 00:11:18.712 "polls": 888, 00:11:18.712 "idle_polls": 888, 00:11:18.712 "completions": 0, 00:11:18.712 "requests": 0, 00:11:18.712 "request_latency": 0, 00:11:18.712 "pending_free_request": 0, 00:11:18.712 "pending_rdma_read": 0, 00:11:18.712 "pending_rdma_write": 0, 00:11:18.712 "pending_rdma_send": 0, 00:11:18.712 "total_send_wrs": 0, 00:11:18.712 "send_doorbell_updates": 0, 00:11:18.712 "total_recv_wrs": 4096, 00:11:18.712 "recv_doorbell_updates": 1 00:11:18.712 }, 00:11:18.712 { 00:11:18.712 "name": "mlx5_1", 00:11:18.712 "polls": 888, 00:11:18.712 "idle_polls": 888, 00:11:18.712 "completions": 0, 00:11:18.712 "requests": 0, 00:11:18.712 "request_latency": 0, 00:11:18.712 "pending_free_request": 0, 00:11:18.712 "pending_rdma_read": 0, 00:11:18.712 "pending_rdma_write": 0, 00:11:18.712 "pending_rdma_send": 0, 00:11:18.712 "total_send_wrs": 0, 00:11:18.712 "send_doorbell_updates": 0, 00:11:18.712 "total_recv_wrs": 4096, 00:11:18.712 "recv_doorbell_updates": 1 00:11:18.712 } 00:11:18.712 ] 00:11:18.712 } 00:11:18.712 ] 00:11:18.712 } 00:11:18.712 ] 00:11:18.712 }' 00:11:18.712 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:18.712 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:18.712 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:18.712 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:18.712 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:18.712 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:18.712 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:18.712 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:18.712 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:18.712 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:18.712 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:11:18.712 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:11:18.712 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:11:18.712 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:11:18.712 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 
2 > 0 )) 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.971 Malloc1 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.971 [2024-11-20 11:34:22.363869] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 
00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:18.971 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:18.971 [2024-11-20 11:34:22.409895] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562' 00:11:19.230 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:19.230 could not add new controller: failed to write to nvme-fabrics device 00:11:19.230 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:19.230 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:19.230 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:19.230 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:19.230 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 
00:11:19.230 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.230 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.230 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.230 11:34:22 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:20.165 11:34:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:20.165 11:34:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:20.165 11:34:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:20.165 11:34:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:20.165 11:34:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:22.066 11:34:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:22.066 11:34:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:22.066 11:34:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:22.066 11:34:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:22.066 11:34:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:22.066 11:34:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:22.066 11:34:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- 
# nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:23.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.000 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:23.000 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:23.000 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:23.000 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.000 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.000 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:23.000 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:23.000 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:11:23.000 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.000 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.000 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.000 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:23.000 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:23.000 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- 
# valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:23.000 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:23.000 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:23.001 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:23.001 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:23.001 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:23.001 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:23.001 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:23.001 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:23.001 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:23.259 [2024-11-20 11:34:26.501829] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562' 00:11:23.259 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:23.259 could not add new controller: failed to write to nvme-fabrics device 00:11:23.259 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:23.259 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:23.259 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:23.259 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:23.259 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:23.259 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.259 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.259 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.259 11:34:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:24.194 11:34:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:24.194 11:34:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:24.194 11:34:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:24.194 11:34:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:24.194 11:34:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:26.096 11:34:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:26.096 11:34:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:26.096 11:34:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:26.096 
11:34:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:26.096 11:34:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:26.096 11:34:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:26.096 11:34:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:27.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:27.468 11:34:30 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.468 [2024-11-20 11:34:30.592258] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.468 11:34:30 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.468 11:34:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:28.440 11:34:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:28.440 11:34:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:28.440 11:34:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:28.440 11:34:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:28.440 11:34:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:30.395 11:34:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:30.395 11:34:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:30.395 11:34:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:30.395 11:34:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:30.395 11:34:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:30.395 11:34:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:30.395 11:34:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.330 
11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.330 [2024-11-20 11:34:34.648575] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.330 11:34:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.330 11:34:34 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:32.264 11:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:32.264 11:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:32.264 11:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:32.264 11:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:32.264 11:34:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:34.791 11:34:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:34.791 11:34:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:34.791 11:34:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:34.791 11:34:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:34.791 11:34:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:34.791 11:34:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:34.791 11:34:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:35.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- 
# local i=0 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.356 11:34:38 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.356 [2024-11-20 11:34:38.707292] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.356 11:34:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 
--hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:36.291 11:34:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:36.291 11:34:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:36.291 11:34:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:36.291 11:34:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:36.291 11:34:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:38.821 11:34:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:38.821 11:34:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:38.821 11:34:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:38.821 11:34:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:38.821 11:34:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.821 11:34:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:38.821 11:34:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:39.388 11:34:42 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.388 [2024-11-20 11:34:42.730478] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.388 11:34:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:40.323 11:34:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 
00:11:40.323 11:34:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:40.323 11:34:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.323 11:34:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:40.323 11:34:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:42.854 11:34:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:42.854 11:34:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:42.854 11:34:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.854 11:34:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:42.854 11:34:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.854 11:34:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:42.854 11:34:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 
00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:11:43.420 11:34:46 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.420 [2024-11-20 11:34:46.808837] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.420 11:34:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.355 11:34:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.355 11:34:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:44.355 11:34:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.355 11:34:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:44.355 11:34:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:46.889 11:34:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:46.889 11:34:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:46.889 11:34:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.889 11:34:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:46.889 11:34:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.889 11:34:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:46.889 11:34:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.456 11:34:50 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.456 [2024-11-20 11:34:50.889251] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.456 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.716 [2024-11-20 11:34:50.938003] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.716 11:34:50 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420
00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.716 [2024-11-20 11:34:50.986185] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 ***
00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.716 11:34:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.716 [2024-11-20 11:34:51.034383] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 ***
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:47.716 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.717 [2024-11-20 11:34:51.082542] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 ***
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.717 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:11:47.717 "tick_rate": 2300000000,
00:11:47.717 "poll_groups": [
00:11:47.717 {
00:11:47.717 "name": "nvmf_tgt_poll_group_000",
00:11:47.717 "admin_qpairs": 2,
00:11:47.717 "io_qpairs": 27,
00:11:47.717 "current_admin_qpairs": 0,
00:11:47.717 "current_io_qpairs": 0,
00:11:47.717 "pending_bdev_io": 0,
00:11:47.717 "completed_nvme_io": 127,
00:11:47.717 "transports": [
00:11:47.717 {
00:11:47.717 "trtype": "RDMA",
00:11:47.717 "pending_data_buffer": 0,
00:11:47.717 "devices": [
00:11:47.717 {
00:11:47.717 "name": "mlx5_0",
00:11:47.717 "polls": 3558706,
00:11:47.717 "idle_polls": 3558706,
00:11:47.717 "completions": 0,
00:11:47.717 "requests": 0,
00:11:47.717 "request_latency": 0,
00:11:47.717 "pending_free_request": 0,
00:11:47.717 "pending_rdma_read": 0,
00:11:47.717 "pending_rdma_write": 0,
00:11:47.717 "pending_rdma_send": 0,
00:11:47.717 "total_send_wrs": 0,
00:11:47.717 "send_doorbell_updates": 0,
00:11:47.717 "total_recv_wrs": 4096,
00:11:47.717 "recv_doorbell_updates": 1
00:11:47.717 },
00:11:47.717 {
00:11:47.717 "name": "mlx5_1",
00:11:47.717 "polls": 3558706,
00:11:47.717 "idle_polls": 3558378,
00:11:47.717 "completions": 367,
00:11:47.717 "requests": 183,
00:11:47.717 "request_latency": 34142326,
00:11:47.717 "pending_free_request": 0,
00:11:47.717 "pending_rdma_read": 0,
00:11:47.717 "pending_rdma_write": 0,
00:11:47.717 "pending_rdma_send": 0,
00:11:47.717 "total_send_wrs": 309,
00:11:47.717 "send_doorbell_updates": 161,
00:11:47.717 "total_recv_wrs": 4279,
00:11:47.717 "recv_doorbell_updates": 161
00:11:47.717 }
00:11:47.717 ]
00:11:47.717 }
00:11:47.717 ]
00:11:47.717 },
00:11:47.717 {
00:11:47.717 "name": "nvmf_tgt_poll_group_001",
00:11:47.717 "admin_qpairs": 2,
00:11:47.717 "io_qpairs": 26,
00:11:47.717 "current_admin_qpairs": 0,
00:11:47.717 "current_io_qpairs": 0,
00:11:47.717 "pending_bdev_io": 0,
00:11:47.717 "completed_nvme_io": 119,
00:11:47.717 "transports": [
00:11:47.717 {
00:11:47.717 "trtype": "RDMA",
00:11:47.717 "pending_data_buffer": 0,
00:11:47.717 "devices": [
00:11:47.717 {
00:11:47.717 "name": "mlx5_0",
00:11:47.717 "polls": 3591848,
00:11:47.717 "idle_polls": 3591848,
00:11:47.717 "completions": 0,
00:11:47.717 "requests": 0,
00:11:47.717 "request_latency": 0,
00:11:47.717 "pending_free_request": 0,
00:11:47.717 "pending_rdma_read": 0,
00:11:47.717 "pending_rdma_write": 0,
00:11:47.717 "pending_rdma_send": 0,
00:11:47.717 "total_send_wrs": 0,
00:11:47.717 "send_doorbell_updates": 0,
00:11:47.717 "total_recv_wrs": 4096,
00:11:47.717 "recv_doorbell_updates": 1
00:11:47.717 },
00:11:47.717 {
00:11:47.717 "name": "mlx5_1",
00:11:47.717 "polls": 3591848,
00:11:47.717 "idle_polls": 3591541,
00:11:47.717 "completions": 348,
00:11:47.717 "requests": 174,
00:11:47.717 "request_latency": 34605508,
00:11:47.717 "pending_free_request": 0,
00:11:47.717 "pending_rdma_read": 0,
00:11:47.717 "pending_rdma_write": 0,
00:11:47.717 "pending_rdma_send": 0,
00:11:47.717 "total_send_wrs": 292,
00:11:47.717 "send_doorbell_updates": 150,
00:11:47.717 "total_recv_wrs": 4270,
00:11:47.717 "recv_doorbell_updates": 151
00:11:47.717 }
00:11:47.717 ]
00:11:47.717 }
00:11:47.717 ]
00:11:47.717 },
00:11:47.717 {
00:11:47.717 "name": "nvmf_tgt_poll_group_002",
00:11:47.717 "admin_qpairs": 1,
00:11:47.717 "io_qpairs": 26,
00:11:47.717 "current_admin_qpairs": 0,
00:11:47.717 "current_io_qpairs": 0,
00:11:47.717 "pending_bdev_io": 0,
00:11:47.717 "completed_nvme_io": 126,
00:11:47.717 "transports": [
00:11:47.717 {
00:11:47.717 "trtype": "RDMA",
00:11:47.717 "pending_data_buffer": 0,
00:11:47.717 "devices": [
00:11:47.717 {
00:11:47.717 "name": "mlx5_0",
00:11:47.717 "polls": 3528139,
00:11:47.717 "idle_polls": 3528139,
00:11:47.717 "completions": 0,
00:11:47.717 "requests": 0,
00:11:47.717 "request_latency": 0,
00:11:47.717 "pending_free_request": 0,
00:11:47.717 "pending_rdma_read": 0,
00:11:47.717 "pending_rdma_write": 0,
00:11:47.717 "pending_rdma_send": 0,
00:11:47.717 "total_send_wrs": 0,
00:11:47.717 "send_doorbell_updates": 0,
00:11:47.717 "total_recv_wrs": 4096,
00:11:47.717 "recv_doorbell_updates": 1
00:11:47.717 },
00:11:47.717 {
00:11:47.717 "name": "mlx5_1",
00:11:47.717 "polls": 3528139,
00:11:47.717 "idle_polls": 3527871,
00:11:47.717 "completions": 309,
00:11:47.717 "requests": 154,
00:11:47.717 "request_latency": 32499590,
00:11:47.717 "pending_free_request": 0,
00:11:47.717 "pending_rdma_read": 0,
00:11:47.717 "pending_rdma_write": 0,
00:11:47.717 "pending_rdma_send": 0,
00:11:47.717 "total_send_wrs": 268,
00:11:47.717 "send_doorbell_updates": 131,
00:11:47.717 "total_recv_wrs": 4250,
00:11:47.717 "recv_doorbell_updates": 131
00:11:47.717 }
00:11:47.717 ]
00:11:47.717 }
00:11:47.717 ]
00:11:47.717 },
00:11:47.717 {
00:11:47.717 "name": "nvmf_tgt_poll_group_003",
00:11:47.717 "admin_qpairs": 2,
00:11:47.717 "io_qpairs": 26,
00:11:47.718 "current_admin_qpairs": 0,
00:11:47.718 "current_io_qpairs": 0,
00:11:47.718 "pending_bdev_io": 0,
00:11:47.718 "completed_nvme_io": 83,
00:11:47.718 "transports": [
00:11:47.718 {
00:11:47.718 "trtype": "RDMA",
00:11:47.718 "pending_data_buffer": 0,
00:11:47.718 "devices": [
00:11:47.718 {
00:11:47.718 "name": "mlx5_0",
00:11:47.718 "polls": 2784440,
00:11:47.718 "idle_polls": 2784440,
00:11:47.718 "completions": 0,
00:11:47.718 "requests": 0,
00:11:47.718 "request_latency": 0,
00:11:47.718 "pending_free_request": 0,
00:11:47.718 "pending_rdma_read": 0,
00:11:47.718 "pending_rdma_write": 0,
00:11:47.718 "pending_rdma_send": 0,
00:11:47.718 "total_send_wrs": 0,
00:11:47.718 "send_doorbell_updates": 0,
00:11:47.718 "total_recv_wrs": 4096,
00:11:47.718 "recv_doorbell_updates": 1
00:11:47.718 },
00:11:47.718 {
00:11:47.718 "name": "mlx5_1",
00:11:47.718 "polls": 2784440,
00:11:47.718 "idle_polls": 2784187,
00:11:47.718 "completions": 276,
00:11:47.718 "requests": 138,
00:11:47.718 "request_latency": 23671348,
00:11:47.718 "pending_free_request": 0,
00:11:47.718 "pending_rdma_read": 0,
00:11:47.718 "pending_rdma_write": 0,
00:11:47.718 "pending_rdma_send": 0,
00:11:47.718 "total_send_wrs": 220,
00:11:47.718 "send_doorbell_updates": 126,
00:11:47.718 "total_recv_wrs": 4234,
00:11:47.718 "recv_doorbell_updates": 127
00:11:47.718 }
00:11:47.718 ]
00:11:47.718 }
00:11:47.718 ]
00:11:47.718 }
00:11:47.718 ]
00:11:47.718 }'
00:11:47.718 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:11:47.718 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:11:47.718 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:11:47.718 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:11:47.976 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:11:47.976 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:11:47.976 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:11:47.976 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:11:47.976 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:11:47.976 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 ))
00:11:47.976 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']'
00:11:47.976 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions'
00:11:47.976 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions'
00:11:47.976 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions'
00:11:47.976 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:11:47.976 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1300 > 0 ))
00:11:47.976 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency'
00:11:47.976 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency'
00:11:47.976 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:11:47.976
11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:11:47.976 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 124918772 > 0 )) 00:11:47.976 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:47.977 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:47.977 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:47.977 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@99 -- # sync 00:11:47.977 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:11:47.977 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:11:47.977 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # set +e 00:11:47.977 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:47.977 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:11:47.977 rmmod nvme_rdma 00:11:47.977 rmmod nvme_fabrics 00:11:47.977 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:47.977 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # set -e 00:11:47.977 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # return 0 00:11:47.977 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # '[' -n 1586018 ']' 00:11:47.977 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@337 -- # killprocess 1586018 00:11:47.977 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1586018 ']' 00:11:47.977 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1586018 00:11:47.977 11:34:51 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:47.977 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.977 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1586018 00:11:48.237 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.237 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.238 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1586018' 00:11:48.238 killing process with pid 1586018 00:11:48.238 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1586018 00:11:48.238 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1586018 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # nvmf_fini 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@264 -- # local dev 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@267 -- # remove_target_ns 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@268 -- # delete_main_bridge 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:48.497 11:34:51 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@130 -- # return 0 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/setup.sh@41 -- # _dev=0 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # dev_map=() 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@284 -- # iptr 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@542 -- # iptables-save 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@542 -- # iptables-restore 00:11:48.497 00:11:48.497 real 0m36.529s 00:11:48.497 user 2m2.229s 00:11:48.497 sys 0m6.302s 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.497 ************************************ 00:11:48.497 END TEST nvmf_rpc 00:11:48.497 ************************************ 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:48.497 ************************************ 00:11:48.497 START TEST nvmf_invalid 00:11:48.497 ************************************ 00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:11:48.497 * Looking for test storage... 
00:11:48.497 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version
00:11:48.497 11:34:51 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-:
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-:
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<'
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:11:48.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:48.758 --rc genhtml_branch_coverage=1
00:11:48.758 --rc genhtml_function_coverage=1
00:11:48.758 --rc genhtml_legend=1
00:11:48.758 --rc geninfo_all_blocks=1
00:11:48.758 --rc geninfo_unexecuted_blocks=1
00:11:48.758
00:11:48.758 '
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:11:48.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:48.758 --rc genhtml_branch_coverage=1
00:11:48.758 --rc genhtml_function_coverage=1
00:11:48.758 --rc genhtml_legend=1
00:11:48.758 --rc geninfo_all_blocks=1
00:11:48.758 --rc geninfo_unexecuted_blocks=1
00:11:48.758
00:11:48.758 '
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:11:48.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:48.758 --rc genhtml_branch_coverage=1
00:11:48.758 --rc genhtml_function_coverage=1
00:11:48.758 --rc genhtml_legend=1
00:11:48.758 --rc geninfo_all_blocks=1
00:11:48.758 --rc geninfo_unexecuted_blocks=1
00:11:48.758
00:11:48.758 '
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:11:48.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:48.758 --rc genhtml_branch_coverage=1
00:11:48.758 --rc genhtml_function_coverage=1
00:11:48.758 --rc genhtml_legend=1
00:11:48.758 --rc geninfo_all_blocks=1
00:11:48.758 --rc geninfo_unexecuted_blocks=1
00:11:48.758
00:11:48.758 '
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NET_TYPE=phy
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:11:48.758 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@50 -- # : 0 
00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:48.759 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # '[' -z 
rdma ']' 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # remove_target_ns 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # xtrace_disable 00:11:48.759 11:34:52 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@131 -- # pci_devs=() 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:55.330 11:34:58 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@135 -- # net_devs=() 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@136 -- # e810=() 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@136 -- # local -ga e810 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@137 -- # x722=() 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@137 -- # local -ga x722 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@138 -- # mlx=() 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@138 -- # local -ga mlx 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:55.330 11:34:58 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:11:55.330 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:55.331 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:55.331 11:34:58 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:55.331 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 
-- # (( 1 == 0 )) 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:55.331 Found net devices under 0000:18:00.0: mlx_0_0 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:55.331 Found net devices under 0000:18:00.1: mlx_0_1 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # get_rdma_if_list 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@75 -- # rdma_devs=() 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:11:55.331 11:34:58 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@89 -- # continue 2 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@89 -- # continue 2 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:11:55.331 
11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # is_hw=yes 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@61 -- # uname 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_cm 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_core 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_umad 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe iw_cm 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@28 -- # local -g _dev 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # ips=() 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@58 -- # key_initiator=target1 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:11:55.331 11:34:58 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772161 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:11:55.331 10.0.0.1 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:11:55.331 11:34:58 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:11:55.331 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772162 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:11:55.332 10.0.0.2 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:11:55.332 11:34:58 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:11:55.332 11:34:58 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=target0 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:11:55.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:11:55.332 00:11:55.332 --- 10.0.0.2 ping statistics --- 00:11:55.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.332 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=target0 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 
-- # cat /sys/class/net/mlx_0_1/ifalias 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:11:55.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.025 ms 00:11:55.332 00:11:55.332 --- 10.0.0.2 ping statistics --- 00:11:55.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.332 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair++ )) 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # return 0 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:11:55.332 
11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=target0 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 
00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev target1 00:11:55.332 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=target1 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@335 -- # 
NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=target0 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@338 
-- # get_rdma_target_ip_address 1 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev target1 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=target1 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # nvmfpid=1592954 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # waitforlisten 1592954 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1592954 ']' 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:55.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:55.333 [2024-11-20 11:34:58.424923] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:11:55.333 [2024-11-20 11:34:58.424985] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.333 [2024-11-20 11:34:58.498775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.333 [2024-11-20 11:34:58.542727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.333 [2024-11-20 11:34:58.542770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.333 [2024-11-20 11:34:58.542780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.333 [2024-11-20 11:34:58.542789] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.333 [2024-11-20 11:34:58.542796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:55.333 [2024-11-20 11:34:58.543994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.333 [2024-11-20 11:34:58.544086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.333 [2024-11-20 11:34:58.544111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.333 [2024-11-20 11:34:58.544113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:55.333 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22903 00:11:55.593 [2024-11-20 11:34:58.879598] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:55.593 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:55.593 { 00:11:55.593 "nqn": "nqn.2016-06.io.spdk:cnode22903", 00:11:55.593 "tgt_name": "foobar", 00:11:55.593 "method": "nvmf_create_subsystem", 00:11:55.593 "req_id": 1 00:11:55.593 } 00:11:55.593 Got JSON-RPC 
error response 00:11:55.593 response: 00:11:55.593 { 00:11:55.593 "code": -32603, 00:11:55.593 "message": "Unable to find target foobar" 00:11:55.593 }' 00:11:55.593 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:55.593 { 00:11:55.593 "nqn": "nqn.2016-06.io.spdk:cnode22903", 00:11:55.593 "tgt_name": "foobar", 00:11:55.593 "method": "nvmf_create_subsystem", 00:11:55.593 "req_id": 1 00:11:55.593 } 00:11:55.593 Got JSON-RPC error response 00:11:55.593 response: 00:11:55.593 { 00:11:55.593 "code": -32603, 00:11:55.593 "message": "Unable to find target foobar" 00:11:55.593 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:55.593 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:55.593 11:34:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode7902 00:11:55.852 [2024-11-20 11:34:59.084282] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7902: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:55.852 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:55.852 { 00:11:55.852 "nqn": "nqn.2016-06.io.spdk:cnode7902", 00:11:55.852 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:55.852 "method": "nvmf_create_subsystem", 00:11:55.852 "req_id": 1 00:11:55.852 } 00:11:55.852 Got JSON-RPC error response 00:11:55.852 response: 00:11:55.852 { 00:11:55.852 "code": -32602, 00:11:55.852 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:55.852 }' 00:11:55.852 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:55.852 { 00:11:55.852 "nqn": "nqn.2016-06.io.spdk:cnode7902", 00:11:55.852 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:55.852 "method": "nvmf_create_subsystem", 
00:11:55.852 "req_id": 1 00:11:55.852 } 00:11:55.852 Got JSON-RPC error response 00:11:55.852 response: 00:11:55.852 { 00:11:55.852 "code": -32602, 00:11:55.852 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:55.852 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:55.852 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:55.852 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14013 00:11:55.852 [2024-11-20 11:34:59.288942] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14013: invalid model number 'SPDK_Controller' 00:11:55.852 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:55.852 { 00:11:55.852 "nqn": "nqn.2016-06.io.spdk:cnode14013", 00:11:55.852 "model_number": "SPDK_Controller\u001f", 00:11:55.852 "method": "nvmf_create_subsystem", 00:11:55.852 "req_id": 1 00:11:55.852 } 00:11:55.852 Got JSON-RPC error response 00:11:55.852 response: 00:11:55.852 { 00:11:55.852 "code": -32602, 00:11:55.852 "message": "Invalid MN SPDK_Controller\u001f" 00:11:55.852 }' 00:11:55.852 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:55.852 { 00:11:55.852 "nqn": "nqn.2016-06.io.spdk:cnode14013", 00:11:55.852 "model_number": "SPDK_Controller\u001f", 00:11:55.852 "method": "nvmf_create_subsystem", 00:11:55.852 "req_id": 1 00:11:55.852 } 00:11:55.852 Got JSON-RPC error response 00:11:55.852 response: 00:11:55.852 { 00:11:55.852 "code": -32602, 00:11:55.852 "message": "Invalid MN SPDK_Controller\u001f" 00:11:55.852 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:55.852 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:55.852 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # 
local length=21 ll 00:11:55.852 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:55.852 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:55.852 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:55.852 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:55.852 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=4 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# echo -e '\x75' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 74 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:56.111 11:34:59 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ , == \- ]] 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ', x;\4a-6uEM JnQTD{ C' 00:11:56.111 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ', x;\4a-6uEM JnQTD{ C' nqn.2016-06.io.spdk:cnode17640 00:11:56.370 [2024-11-20 11:34:59.654235] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17640: invalid serial number ', x;\4a-6uEM JnQTD{ C' 00:11:56.370 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:56.370 { 00:11:56.370 "nqn": "nqn.2016-06.io.spdk:cnode17640", 00:11:56.370 "serial_number": ", x;\\4a-6uEM JnQTD{ C", 00:11:56.370 "method": "nvmf_create_subsystem", 00:11:56.370 "req_id": 1 00:11:56.370 } 00:11:56.370 Got JSON-RPC error response 00:11:56.370 response: 00:11:56.370 { 00:11:56.370 "code": -32602, 00:11:56.370 "message": "Invalid SN , x;\\4a-6uEM JnQTD{ C" 00:11:56.370 }' 00:11:56.370 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:56.370 { 00:11:56.370 "nqn": "nqn.2016-06.io.spdk:cnode17640", 00:11:56.370 "serial_number": ", x;\\4a-6uEM JnQTD{ C", 00:11:56.370 "method": "nvmf_create_subsystem", 00:11:56.370 "req_id": 1 00:11:56.370 } 00:11:56.370 Got JSON-RPC error response 00:11:56.370 response: 00:11:56.370 { 00:11:56.370 "code": -32602, 00:11:56.370 "message": "Invalid SN , x;\\4a-6uEM JnQTD{ C" 00:11:56.370 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:56.370 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:56.370 11:34:59 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x66' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # printf %x 32 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:11:56.371 11:34:59 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:56.371 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:56.372 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.372 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.372 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:56.372 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:56.372 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:56.372 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.372 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.372 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:11:56.372 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 
00:11:56.372 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:11:56.372 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
81 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=8 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:56.631 11:34:59 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.631 11:34:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:56.632 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:56.632 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:56.632 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.632 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.632 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:56.632 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:56.632 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:56.632 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.632 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.632 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ? 
== \- ]] 00:11:56.632 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '?{}'\''f9@" _%;j4'\''n5[)_7P2QcL'\''zFU[Qde8iqUz)P' 00:11:56.632 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '?{}'\''f9@" _%;j4'\''n5[)_7P2QcL'\''zFU[Qde8iqUz)P' nqn.2016-06.io.spdk:cnode13581 00:11:56.890 [2024-11-20 11:35:00.192012] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13581: invalid model number '?{}'f9@" _%;j4'n5[)_7P2QcL'zFU[Qde8iqUz)P' 00:11:56.890 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:56.890 { 00:11:56.890 "nqn": "nqn.2016-06.io.spdk:cnode13581", 00:11:56.891 "model_number": "?{}'\''f9@\" _%;j4'\''n5[)_7P2QcL'\''zFU[Qde8iqUz)P", 00:11:56.891 "method": "nvmf_create_subsystem", 00:11:56.891 "req_id": 1 00:11:56.891 } 00:11:56.891 Got JSON-RPC error response 00:11:56.891 response: 00:11:56.891 { 00:11:56.891 "code": -32602, 00:11:56.891 "message": "Invalid MN ?{}'\''f9@\" _%;j4'\''n5[)_7P2QcL'\''zFU[Qde8iqUz)P" 00:11:56.891 }' 00:11:56.891 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:56.891 { 00:11:56.891 "nqn": "nqn.2016-06.io.spdk:cnode13581", 00:11:56.891 "model_number": "?{}'f9@\" _%;j4'n5[)_7P2QcL'zFU[Qde8iqUz)P", 00:11:56.891 "method": "nvmf_create_subsystem", 00:11:56.891 "req_id": 1 00:11:56.891 } 00:11:56.891 Got JSON-RPC error response 00:11:56.891 response: 00:11:56.891 { 00:11:56.891 "code": -32602, 00:11:56.891 "message": "Invalid MN ?{}'f9@\" _%;j4'n5[)_7P2QcL'zFU[Qde8iqUz)P" 00:11:56.891 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:56.891 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:11:57.149 [2024-11-20 11:35:00.425462] 
rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14d4b40/0x14d9030) succeed. 00:11:57.149 [2024-11-20 11:35:00.434860] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14d61d0/0x151a6d0) succeed. 00:11:57.149 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:57.407 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 10.0.0.2 -s 4421 00:11:57.666 [2024-11-20 11:35:00.967355] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:57.666 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # out='request: 00:11:57.666 { 00:11:57.666 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:57.666 "listen_address": { 00:11:57.666 "trtype": "rdma", 00:11:57.666 "traddr": "10.0.0.2", 00:11:57.666 "trsvcid": "4421" 00:11:57.666 }, 00:11:57.666 "method": "nvmf_subsystem_remove_listener", 00:11:57.666 "req_id": 1 00:11:57.666 } 00:11:57.666 Got JSON-RPC error response 00:11:57.666 response: 00:11:57.666 { 00:11:57.666 "code": -32602, 00:11:57.666 "message": "Invalid parameters" 00:11:57.666 }' 00:11:57.666 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@65 -- # [[ request: 00:11:57.666 { 00:11:57.666 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:57.666 "listen_address": { 00:11:57.666 "trtype": "rdma", 00:11:57.666 "traddr": "10.0.0.2", 00:11:57.666 "trsvcid": "4421" 00:11:57.666 }, 00:11:57.666 "method": "nvmf_subsystem_remove_listener", 00:11:57.666 "req_id": 1 00:11:57.666 } 00:11:57.666 Got JSON-RPC error response 00:11:57.666 response: 00:11:57.666 { 00:11:57.666 "code": -32602, 00:11:57.666 "message": "Invalid parameters" 00:11:57.666 } != *\U\n\a\b\l\e\ \t\o\ 
\s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:57.666 11:35:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9821 -i 0 00:11:57.924 [2024-11-20 11:35:01.176082] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9821: invalid cntlid range [0-65519] 00:11:57.924 11:35:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@68 -- # out='request: 00:11:57.924 { 00:11:57.924 "nqn": "nqn.2016-06.io.spdk:cnode9821", 00:11:57.924 "min_cntlid": 0, 00:11:57.924 "method": "nvmf_create_subsystem", 00:11:57.924 "req_id": 1 00:11:57.924 } 00:11:57.924 Got JSON-RPC error response 00:11:57.924 response: 00:11:57.924 { 00:11:57.924 "code": -32602, 00:11:57.924 "message": "Invalid cntlid range [0-65519]" 00:11:57.924 }' 00:11:57.924 11:35:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # [[ request: 00:11:57.924 { 00:11:57.924 "nqn": "nqn.2016-06.io.spdk:cnode9821", 00:11:57.924 "min_cntlid": 0, 00:11:57.924 "method": "nvmf_create_subsystem", 00:11:57.924 "req_id": 1 00:11:57.924 } 00:11:57.924 Got JSON-RPC error response 00:11:57.924 response: 00:11:57.924 { 00:11:57.924 "code": -32602, 00:11:57.924 "message": "Invalid cntlid range [0-65519]" 00:11:57.924 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:57.924 11:35:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2973 -i 65520 00:11:57.925 [2024-11-20 11:35:01.400871] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2973: invalid cntlid range [65520-65519] 00:11:58.183 11:35:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # out='request: 00:11:58.183 { 00:11:58.183 "nqn": "nqn.2016-06.io.spdk:cnode2973", 00:11:58.183 "min_cntlid": 65520, 
00:11:58.183 "method": "nvmf_create_subsystem", 00:11:58.183 "req_id": 1 00:11:58.183 } 00:11:58.183 Got JSON-RPC error response 00:11:58.183 response: 00:11:58.183 { 00:11:58.183 "code": -32602, 00:11:58.183 "message": "Invalid cntlid range [65520-65519]" 00:11:58.183 }' 00:11:58.183 11:35:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@71 -- # [[ request: 00:11:58.183 { 00:11:58.183 "nqn": "nqn.2016-06.io.spdk:cnode2973", 00:11:58.183 "min_cntlid": 65520, 00:11:58.183 "method": "nvmf_create_subsystem", 00:11:58.183 "req_id": 1 00:11:58.183 } 00:11:58.183 Got JSON-RPC error response 00:11:58.183 response: 00:11:58.183 { 00:11:58.183 "code": -32602, 00:11:58.183 "message": "Invalid cntlid range [65520-65519]" 00:11:58.183 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:58.183 11:35:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29936 -I 0 00:11:58.183 [2024-11-20 11:35:01.609617] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29936: invalid cntlid range [1-0] 00:11:58.183 11:35:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@72 -- # out='request: 00:11:58.183 { 00:11:58.183 "nqn": "nqn.2016-06.io.spdk:cnode29936", 00:11:58.183 "max_cntlid": 0, 00:11:58.183 "method": "nvmf_create_subsystem", 00:11:58.183 "req_id": 1 00:11:58.183 } 00:11:58.183 Got JSON-RPC error response 00:11:58.183 response: 00:11:58.183 { 00:11:58.183 "code": -32602, 00:11:58.183 "message": "Invalid cntlid range [1-0]" 00:11:58.183 }' 00:11:58.183 11:35:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # [[ request: 00:11:58.183 { 00:11:58.183 "nqn": "nqn.2016-06.io.spdk:cnode29936", 00:11:58.183 "max_cntlid": 0, 00:11:58.183 "method": "nvmf_create_subsystem", 00:11:58.183 "req_id": 1 00:11:58.183 } 00:11:58.183 Got JSON-RPC error response 00:11:58.183 response: 
00:11:58.183 { 00:11:58.183 "code": -32602, 00:11:58.183 "message": "Invalid cntlid range [1-0]" 00:11:58.183 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:58.183 11:35:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22602 -I 65520 00:11:58.441 [2024-11-20 11:35:01.822420] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22602: invalid cntlid range [1-65520] 00:11:58.442 11:35:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # out='request: 00:11:58.442 { 00:11:58.442 "nqn": "nqn.2016-06.io.spdk:cnode22602", 00:11:58.442 "max_cntlid": 65520, 00:11:58.442 "method": "nvmf_create_subsystem", 00:11:58.442 "req_id": 1 00:11:58.442 } 00:11:58.442 Got JSON-RPC error response 00:11:58.442 response: 00:11:58.442 { 00:11:58.442 "code": -32602, 00:11:58.442 "message": "Invalid cntlid range [1-65520]" 00:11:58.442 }' 00:11:58.442 11:35:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # [[ request: 00:11:58.442 { 00:11:58.442 "nqn": "nqn.2016-06.io.spdk:cnode22602", 00:11:58.442 "max_cntlid": 65520, 00:11:58.442 "method": "nvmf_create_subsystem", 00:11:58.442 "req_id": 1 00:11:58.442 } 00:11:58.442 Got JSON-RPC error response 00:11:58.442 response: 00:11:58.442 { 00:11:58.442 "code": -32602, 00:11:58.442 "message": "Invalid cntlid range [1-65520]" 00:11:58.442 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:58.442 11:35:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27461 -i 6 -I 5 00:11:58.700 [2024-11-20 11:35:02.015097] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27461: invalid cntlid range [6-5] 00:11:58.700 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@78 -- # out='request: 00:11:58.700 { 00:11:58.700 "nqn": "nqn.2016-06.io.spdk:cnode27461", 00:11:58.700 "min_cntlid": 6, 00:11:58.700 "max_cntlid": 5, 00:11:58.700 "method": "nvmf_create_subsystem", 00:11:58.700 "req_id": 1 00:11:58.700 } 00:11:58.700 Got JSON-RPC error response 00:11:58.700 response: 00:11:58.700 { 00:11:58.700 "code": -32602, 00:11:58.700 "message": "Invalid cntlid range [6-5]" 00:11:58.700 }' 00:11:58.700 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # [[ request: 00:11:58.700 { 00:11:58.700 "nqn": "nqn.2016-06.io.spdk:cnode27461", 00:11:58.700 "min_cntlid": 6, 00:11:58.700 "max_cntlid": 5, 00:11:58.700 "method": "nvmf_create_subsystem", 00:11:58.700 "req_id": 1 00:11:58.700 } 00:11:58.700 Got JSON-RPC error response 00:11:58.700 response: 00:11:58.700 { 00:11:58.700 "code": -32602, 00:11:58.700 "message": "Invalid cntlid range [6-5]" 00:11:58.700 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:58.700 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:58.700 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@82 -- # out='request: 00:11:58.700 { 00:11:58.700 "name": "foobar", 00:11:58.700 "method": "nvmf_delete_target", 00:11:58.700 "req_id": 1 00:11:58.700 } 00:11:58.700 Got JSON-RPC error response 00:11:58.700 response: 00:11:58.700 { 00:11:58.700 "code": -32602, 00:11:58.700 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:11:58.700 }' 00:11:58.700 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # [[ request: 00:11:58.700 { 00:11:58.700 "name": "foobar", 00:11:58.700 "method": "nvmf_delete_target", 00:11:58.700 "req_id": 1 00:11:58.700 } 00:11:58.700 Got JSON-RPC error response 00:11:58.700 response: 00:11:58.700 { 00:11:58.700 "code": -32602, 00:11:58.700 "message": "The specified target doesn't exist, cannot delete it." 00:11:58.700 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:58.700 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:58.700 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@86 -- # nvmftestfini 00:11:58.700 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:58.701 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@99 -- # sync 00:11:58.701 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:11:58.701 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:11:58.701 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # set +e 00:11:58.701 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:58.701 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:11:58.701 rmmod nvme_rdma 00:11:58.959 rmmod nvme_fabrics 00:11:58.959 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:58.959 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # set -e 00:11:58.959 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # return 0 00:11:58.959 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # '[' 
-n 1592954 ']' 00:11:58.959 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@337 -- # killprocess 1592954 00:11:58.959 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1592954 ']' 00:11:58.959 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1592954 00:11:58.959 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:11:58.959 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.959 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1592954 00:11:58.959 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.959 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.959 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1592954' 00:11:58.959 killing process with pid 1592954 00:11:58.959 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1592954 00:11:58.959 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1592954 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # nvmf_fini 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@264 -- # local dev 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@267 -- # remove_target_ns 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@268 -- # delete_main_bridge 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@130 -- # return 0 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:11:59.218 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 
00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # _dev=0 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # dev_map=() 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@284 -- # iptr 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@542 -- # iptables-save 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@542 -- # iptables-restore 00:11:59.219 00:11:59.219 real 0m10.694s 00:11:59.219 user 0m20.053s 00:11:59.219 sys 0m6.000s 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:59.219 ************************************ 00:11:59.219 END TEST nvmf_invalid 00:11:59.219 ************************************ 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.219 ************************************ 00:11:59.219 START TEST nvmf_connect_stress 00:11:59.219 ************************************ 00:11:59.219 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:11:59.479 * Looking for test storage... 00:11:59.479 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.479 11:35:02 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:59.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.479 --rc genhtml_branch_coverage=1 00:11:59.479 --rc genhtml_function_coverage=1 00:11:59.479 --rc genhtml_legend=1 00:11:59.479 --rc geninfo_all_blocks=1 00:11:59.479 --rc geninfo_unexecuted_blocks=1 00:11:59.479 00:11:59.479 ' 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:59.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.479 --rc genhtml_branch_coverage=1 00:11:59.479 --rc genhtml_function_coverage=1 00:11:59.479 --rc genhtml_legend=1 00:11:59.479 --rc geninfo_all_blocks=1 00:11:59.479 --rc geninfo_unexecuted_blocks=1 00:11:59.479 00:11:59.479 ' 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:59.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.479 --rc genhtml_branch_coverage=1 00:11:59.479 --rc genhtml_function_coverage=1 00:11:59.479 --rc genhtml_legend=1 00:11:59.479 --rc geninfo_all_blocks=1 00:11:59.479 --rc geninfo_unexecuted_blocks=1 00:11:59.479 00:11:59.479 ' 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:59.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.479 --rc 
genhtml_branch_coverage=1 00:11:59.479 --rc genhtml_function_coverage=1 00:11:59.479 --rc genhtml_legend=1 00:11:59.479 --rc geninfo_all_blocks=1 00:11:59.479 --rc geninfo_unexecuted_blocks=1 00:11:59.479 00:11:59.479 ' 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:59.479 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.480 11:35:02 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@50 -- # : 0 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:59.480 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:11:59.480 11:35:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@135 -- # net_devs=() 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@136 -- # e810=() 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@136 -- # local -ga e810 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@137 -- # x722=() 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@137 -- # local -ga x722 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@138 -- # mlx=() 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:12:06.045 11:35:09 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:06.045 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:06.045 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:06.046 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:12:06.046 11:35:09 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:06.046 Found net devices under 0000:18:00.0: mlx_0_0 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.046 11:35:09 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:06.046 Found net devices under 0000:18:00.1: mlx_0_1 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # get_rdma_if_list 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@75 -- # rdma_devs=() 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@89 -- # continue 2 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@89 -- # continue 2 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:12:06.046 
11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@61 -- # uname 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_cm 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_core 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_umad 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe iw_cm 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 
255 )) 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # ips=() 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@58 -- # key_initiator=target1 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:12:06.046 11:35:09 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:12:06.046 10.0.0.1 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:06.046 11:35:09 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:06.046 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:12:06.047 10.0.0.2 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:12:06.047 11:35:09 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:12:06.047 
11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:06.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:06.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.026 ms 00:12:06.047 00:12:06.047 --- 10.0.0.2 ping statistics --- 00:12:06.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.047 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=mlx_0_1 
00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:06.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:06.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.022 ms 00:12:06.047 00:12:06.047 --- 10.0.0.2 ping statistics --- 00:12:06.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.047 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # return 0 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@107 -- # local dev=target0 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:06.047 11:35:09 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:06.047 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=target1 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:06.048 
11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:06.048 
11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=target1 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 
-- # '[' rdma == rdma ']' 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # nvmfpid=1596591 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # waitforlisten 1596591 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1596591 ']' 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.048 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.048 [2024-11-20 11:35:09.424217] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:12:06.048 [2024-11-20 11:35:09.424276] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.048 [2024-11-20 11:35:09.503711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:06.307 [2024-11-20 11:35:09.552692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.307 [2024-11-20 11:35:09.552730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.307 [2024-11-20 11:35:09.552740] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.307 [2024-11-20 11:35:09.552749] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.307 [2024-11-20 11:35:09.552758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:06.307 [2024-11-20 11:35:09.553940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.307 [2024-11-20 11:35:09.554022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.307 [2024-11-20 11:35:09.554024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.307 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.307 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:06.307 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:06.307 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:06.307 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.307 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.307 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:06.307 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.307 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.307 [2024-11-20 11:35:09.732989] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7f19f0/0x7f5ee0) succeed. 00:12:06.307 [2024-11-20 11:35:09.741968] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7f2fe0/0x837580) succeed. 
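The interface-setup portion of this trace (setup.sh@11-13) shows `val_to_ip` turning the ip_pool counter 167772161 into `10.0.0.1` and 167772162 into `10.0.0.2` before the `ip addr add` calls. As a minimal sketch — a hypothetical reimplementation for illustration, not the exact SPDK helper — the conversion just unpacks a 32-bit integer into dotted-quad octets:

```shell
# Hypothetical stand-in for nvmf/setup.sh's val_to_ip, shown for illustration:
# unpack a 32-bit integer into dotted-quad notation, one octet per byte.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) \
    $((  val        & 255 ))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0A000002 -> 10.0.0.2
```

This matches the trace above, where setup.sh@31 advances `ip_pool` by two per initiator/target pair so each pair lands on consecutive addresses in 10.0.0.0/24.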
00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.566 [2024-11-20 11:35:09.865372] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.566 NULL1 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # 
PERF_PID=1596772 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 
11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in 
$(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.566 11:35:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:07.134 11:35:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.134 11:35:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:07.134 11:35:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.134 11:35:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.134 11:35:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.392 11:35:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.392 11:35:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:07.392 11:35:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.392 11:35:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.392 11:35:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.650 11:35:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.650 11:35:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:07.650 11:35:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.650 11:35:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.650 11:35:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.908 11:35:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.908 11:35:11 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:07.908 11:35:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.908 11:35:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.908 11:35:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.166 11:35:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.166 11:35:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:08.166 11:35:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.166 11:35:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.166 11:35:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.732 11:35:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.732 11:35:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:08.732 11:35:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.732 11:35:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.732 11:35:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.992 11:35:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.992 11:35:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:08.992 11:35:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:12:08.992 11:35:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.992 11:35:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.316 11:35:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.316 11:35:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:09.316 11:35:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.316 11:35:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.316 11:35:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.592 11:35:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.592 11:35:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:09.592 11:35:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.592 11:35:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.592 11:35:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.851 11:35:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.851 11:35:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:09.851 11:35:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.851 11:35:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.851 11:35:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:10.110 11:35:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.110 11:35:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:10.110 11:35:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.110 11:35:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.110 11:35:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.677 11:35:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.677 11:35:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:10.677 11:35:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.677 11:35:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.677 11:35:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.936 11:35:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.936 11:35:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:10.936 11:35:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.936 11:35:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.936 11:35:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.194 11:35:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.194 11:35:14 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:11.194 11:35:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.194 11:35:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.194 11:35:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.453 11:35:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.453 11:35:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:11.453 11:35:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.453 11:35:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.453 11:35:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.022 11:35:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.022 11:35:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:12.022 11:35:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.022 11:35:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.022 11:35:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.280 11:35:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.280 11:35:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:12.280 11:35:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:12:12.280 11:35:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.280 11:35:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.538 11:35:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.538 11:35:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:12.538 11:35:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.539 11:35:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.539 11:35:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.797 11:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.797 11:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:12.797 11:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.797 11:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.797 11:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.055 11:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.055 11:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:13.055 11:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.055 11:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.055 11:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:13.621 11:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.621 11:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:13.621 11:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.621 11:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.621 11:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.880 11:35:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.880 11:35:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:13.880 11:35:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.880 11:35:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.880 11:35:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.138 11:35:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.138 11:35:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:14.138 11:35:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.138 11:35:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.138 11:35:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.396 11:35:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.396 11:35:17 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:14.396 11:35:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.396 11:35:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.396 11:35:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.963 11:35:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.963 11:35:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:14.963 11:35:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.963 11:35:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.963 11:35:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.221 11:35:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.221 11:35:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:15.221 11:35:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.222 11:35:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.222 11:35:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.480 11:35:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.480 11:35:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:15.480 11:35:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:12:15.480 11:35:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.480 11:35:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.739 11:35:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.739 11:35:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:15.739 11:35:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.739 11:35:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.739 11:35:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.000 11:35:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.261 11:35:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:16.261 11:35:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.261 11:35:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.261 11:35:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.519 11:35:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.519 11:35:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:16.519 11:35:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.519 11:35:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.519 11:35:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:16.778 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1596772 00:12:16.778 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1596772) - No such process 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1596772 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@99 -- # sync 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # set +e 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:12:16.778 rmmod nvme_rdma 00:12:16.778 rmmod nvme_fabrics 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # set -e 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # return 0 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # '[' -n 1596591 ']' 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@337 -- # killprocess 1596591 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1596591 ']' 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1596591 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1596591 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1596591' 00:12:16.778 killing process with pid 1596591 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1596591 00:12:16.778 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1596591 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:17.037 11:35:20 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@264 -- # local dev 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@267 -- # remove_target_ns 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@130 -- # return 0 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 
-- # ip addr flush dev mlx_0_1 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # _dev=0 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@284 -- # iptr 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@542 -- # iptables-save 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@542 -- # iptables-restore 00:12:17.037 00:12:17.037 real 0m17.865s 00:12:17.037 user 0m40.313s 00:12:17.037 sys 0m7.435s 00:12:17.037 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.037 
11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.037 ************************************ 00:12:17.037 END TEST nvmf_connect_stress 00:12:17.037 ************************************ 00:12:17.296 11:35:20 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:12:17.296 11:35:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:17.296 11:35:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.296 11:35:20 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:17.296 ************************************ 00:12:17.296 START TEST nvmf_fused_ordering 00:12:17.296 ************************************ 00:12:17.296 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:12:17.296 * Looking for test storage... 
00:12:17.296 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:17.296 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:17.296 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:17.296 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:12:17.555 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:17.555 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.555 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.555 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.555 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.555 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.555 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.555 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.555 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.555 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.555 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.555 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.555 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:17.555 11:35:20 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:17.555 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.555 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:17.555 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:17.555 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:17.555 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:17.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.556 --rc genhtml_branch_coverage=1 00:12:17.556 --rc genhtml_function_coverage=1 00:12:17.556 --rc genhtml_legend=1 00:12:17.556 --rc geninfo_all_blocks=1 00:12:17.556 --rc geninfo_unexecuted_blocks=1 00:12:17.556 00:12:17.556 ' 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:17.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.556 --rc genhtml_branch_coverage=1 00:12:17.556 --rc genhtml_function_coverage=1 00:12:17.556 --rc genhtml_legend=1 00:12:17.556 --rc geninfo_all_blocks=1 00:12:17.556 --rc geninfo_unexecuted_blocks=1 00:12:17.556 00:12:17.556 ' 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:17.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.556 --rc genhtml_branch_coverage=1 00:12:17.556 --rc genhtml_function_coverage=1 00:12:17.556 --rc genhtml_legend=1 00:12:17.556 --rc geninfo_all_blocks=1 00:12:17.556 --rc geninfo_unexecuted_blocks=1 00:12:17.556 00:12:17.556 ' 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:17.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.556 --rc genhtml_branch_coverage=1 00:12:17.556 --rc genhtml_function_coverage=1 00:12:17.556 --rc genhtml_legend=1 00:12:17.556 --rc geninfo_all_blocks=1 00:12:17.556 --rc geninfo_unexecuted_blocks=1 00:12:17.556 00:12:17.556 ' 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@7 -- # uname -s 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 
00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:17.556 11:35:20 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@50 -- # : 0 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:17.556 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # remove_target_ns 00:12:17.556 11:35:20 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:17.556 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:17.557 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:17.557 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:17.557 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # xtrace_disable 00:12:17.557 11:35:20 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@131 -- # pci_devs=() 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@135 -- # net_devs=() 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@136 -- # e810=() 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@136 -- # local -ga e810 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@137 -- # x722=() 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@137 -- # local -ga x722 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@138 -- # mlx=() 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@138 -- # local -ga mlx 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:24.124 
11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:24.124 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:24.124 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:24.125 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:24.125 11:35:26 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:24.125 Found net devices under 0000:18:00.0: mlx_0_0 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:24.125 Found net devices under 0000:18:00.1: mlx_0_1 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # get_rdma_if_list 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@75 -- # rdma_devs=() 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev 
rxe_net_devs rdma_devs 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@89 -- # continue 2 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:12:24.125 11:35:26 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@89 -- # continue 2 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # is_hw=yes 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@61 -- # uname 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_cm 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_core 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_umad 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe iw_cm 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@70 -- # modprobe rdma_cm 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@28 -- # local -g _dev 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # ips=() 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:12:24.125 11:35:26 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@58 -- # key_initiator=target1 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772161 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:12:24.125 
11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:12:24.125 10.0.0.1 00:12:24.125 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772162 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:12:24.126 10.0.0.2 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@75 -- # set_up mlx_0_0 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/setup.sh@38 -- # ping_ips 1 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=target0 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:24.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:24.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:12:24.126 00:12:24.126 --- 10.0.0.2 ping statistics --- 00:12:24.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.126 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=target0 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:24.126 11:35:26 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:24.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:24.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.022 ms 00:12:24.126 00:12:24.126 --- 10.0.0.2 ping statistics --- 00:12:24.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.126 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # return 0 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
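The `ping_ips` loop above pings each resolved address twice: once with `NVMF_TARGET_NS_CMD` as the namespace argument, once without. From the xtrace (`local -n ns=NVMF_TARGET_NS_CMD`, then `eval ' ping -c 1 10.0.0.2'`), the second argument evidently names a variable holding a command prefix (empty here, since this is a phy run without namespaces). A rough sketch of `ping_ip` consistent with that trace:

```shell
# Rough sketch of the ping_ip helper, reconstructed from the xtrace above.
# The optional second argument names a variable (e.g. NVMF_TARGET_NS_CMD)
# whose value is a command prefix such as "ip netns exec <ns>"; the ping
# is then evaluated behind that prefix.
ping_ip() {
    local ip=$1 in_ns=${2:-} count=1
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns   # nameref: expand the prefix held in that variable
    else
        local ns=
    fi
    eval "$ns ping -c $count $ip"
}
```

In this log the prefix variable is empty, so both invocations reduce to a plain `ping -c 1 10.0.0.2` against the local interface, which is why both round-trip times are in the tens of microseconds.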
nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=target0 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:12:24.126 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=target1 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:24.127 11:35:26 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=target0 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:24.127 
11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=target1 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:24.127 
11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # nvmfpid=1600985 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # waitforlisten 1600985 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 
1600985 ']' 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.127 11:35:26 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.127 [2024-11-20 11:35:26.999509] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:12:24.127 [2024-11-20 11:35:26.999581] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.127 [2024-11-20 11:35:27.078669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.127 [2024-11-20 11:35:27.128548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.127 [2024-11-20 11:35:27.128587] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.127 [2024-11-20 11:35:27.128598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.127 [2024-11-20 11:35:27.128607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.127 [2024-11-20 11:35:27.128615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
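The `waitforlisten 1600985` step above blocks until the freshly launched `nvmf_tgt` is up and listening on `/var/tmp/spdk.sock` (the trace shows `rpc_addr=/var/tmp/spdk.sock` and `max_retries=100`). A simplified sketch of such a wait loop, assuming it merely polls for the unix socket while confirming the pid is still alive (the real helper in autotest_common.sh does more, e.g. verifying the socket answers RPCs, and the retry parameter here is an illustrative addition):

```shell
# Simplified sketch of a waitforlisten-style helper: poll until the SPDK
# RPC unix socket appears, bailing out early if the app process dies.
# The third parameter is hypothetical, added so the timeout is tunable.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died while waiting
        [[ -S $rpc_addr ]] && return 0           # socket is up: done
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Only once this returns 0 does the harness start issuing the `rpc_cmd` calls that configure the transport and subsystem.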
00:12:24.127 [2024-11-20 11:35:27.129028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.127 [2024-11-20 11:35:27.295524] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ece2f0/0x1ed27e0) succeed. 00:12:24.127 [2024-11-20 11:35:27.304403] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ecf7a0/0x1f13e80) succeed. 
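With both mlx5 IB devices created, the `rpc_cmd` calls traced in the following lines configure the target over the RPC socket. Condensed into one place (every flag, NQN, and address below is taken verbatim from this log; `rpc_cmd` is the test harness wrapper around SPDK's `scripts/rpc.py`), the sequence is effectively:

```shell
# Condensed view of the target-setup RPC sequence from fused_ordering.sh,
# as traced in this log; requires a running nvmf_tgt on /var/tmp/spdk.sock.
rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

This yields the 1000 MiB, 512-byte-block null namespace that the `fused_ordering` initiator then attaches to ("Namespace ID: 1 size: 1GB" below).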
00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.127 [2024-11-20 11:35:27.348429] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.127 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.127 NULL1 00:12:24.128 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.128 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # 
rpc_cmd bdev_wait_for_examine 00:12:24.128 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.128 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.128 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.128 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:24.128 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.128 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.128 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.128 11:35:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:24.128 [2024-11-20 11:35:27.404441] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:12:24.128 [2024-11-20 11:35:27.404484] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1601029 ] 00:12:24.387 Attached to nqn.2016-06.io.spdk:cnode1 00:12:24.387 Namespace ID: 1 size: 1GB 00:12:24.387 fused_ordering(0) 00:12:24.387 fused_ordering(1) 00:12:24.387 fused_ordering(2) 00:12:24.387 fused_ordering(3) 00:12:24.387 fused_ordering(4) 00:12:24.387 fused_ordering(5) 00:12:24.387 fused_ordering(6) 00:12:24.387 fused_ordering(7) 00:12:24.387 fused_ordering(8) 00:12:24.387 fused_ordering(9) 00:12:24.387 fused_ordering(10) 00:12:24.387 fused_ordering(11) 00:12:24.387 fused_ordering(12) 00:12:24.387 fused_ordering(13) 00:12:24.387 fused_ordering(14) 00:12:24.387 fused_ordering(15) 00:12:24.387 fused_ordering(16) 00:12:24.387 fused_ordering(17) 00:12:24.387 fused_ordering(18) 00:12:24.387 fused_ordering(19) 00:12:24.387 fused_ordering(20) 00:12:24.387 fused_ordering(21) 00:12:24.387 fused_ordering(22) 00:12:24.387 fused_ordering(23) 00:12:24.387 fused_ordering(24) 00:12:24.387 fused_ordering(25) 00:12:24.387 fused_ordering(26) 00:12:24.387 fused_ordering(27) 00:12:24.387 fused_ordering(28) 00:12:24.387 fused_ordering(29) 00:12:24.387 fused_ordering(30) 00:12:24.387 fused_ordering(31) 00:12:24.387 fused_ordering(32) 00:12:24.387 fused_ordering(33) 00:12:24.387 fused_ordering(34) 00:12:24.387 fused_ordering(35) 00:12:24.387 fused_ordering(36) 00:12:24.387 fused_ordering(37) 00:12:24.387 fused_ordering(38) 00:12:24.387 fused_ordering(39) 00:12:24.387 fused_ordering(40) 00:12:24.387 fused_ordering(41) 00:12:24.387 fused_ordering(42) 00:12:24.387 fused_ordering(43) 00:12:24.387 fused_ordering(44) 00:12:24.387 fused_ordering(45) 00:12:24.387 fused_ordering(46) 00:12:24.387 fused_ordering(47) 00:12:24.387 fused_ordering(48) 00:12:24.387 fused_ordering(49) 00:12:24.387 
fused_ordering(50) 00:12:24.387 fused_ordering(51) 00:12:24.387 fused_ordering(52) 00:12:24.387 fused_ordering(53) 00:12:24.387 fused_ordering(54) 00:12:24.387 fused_ordering(55) 00:12:24.387 fused_ordering(56) 00:12:24.387 fused_ordering(57) 00:12:24.387 fused_ordering(58) 00:12:24.387 fused_ordering(59) 00:12:24.387 fused_ordering(60) 00:12:24.387 fused_ordering(61) 00:12:24.387 fused_ordering(62) 00:12:24.387 fused_ordering(63) 00:12:24.387 fused_ordering(64) 00:12:24.387 fused_ordering(65) 00:12:24.387 fused_ordering(66) 00:12:24.387 fused_ordering(67) 00:12:24.387 fused_ordering(68) 00:12:24.387 fused_ordering(69) 00:12:24.387 fused_ordering(70) 00:12:24.387 fused_ordering(71) 00:12:24.387 fused_ordering(72) 00:12:24.387 fused_ordering(73) 00:12:24.387 fused_ordering(74) 00:12:24.387 fused_ordering(75) 00:12:24.387 fused_ordering(76) 00:12:24.387 fused_ordering(77) 00:12:24.387 fused_ordering(78) 00:12:24.387 fused_ordering(79) 00:12:24.387 fused_ordering(80) 00:12:24.387 fused_ordering(81) 00:12:24.387 fused_ordering(82) 00:12:24.387 fused_ordering(83) 00:12:24.387 fused_ordering(84) 00:12:24.387 fused_ordering(85) 00:12:24.387 fused_ordering(86) 00:12:24.387 fused_ordering(87) 00:12:24.387 fused_ordering(88) 00:12:24.387 fused_ordering(89) 00:12:24.387 fused_ordering(90) 00:12:24.387 fused_ordering(91) 00:12:24.387 fused_ordering(92) 00:12:24.387 fused_ordering(93) 00:12:24.387 fused_ordering(94) 00:12:24.387 fused_ordering(95) 00:12:24.387 fused_ordering(96) 00:12:24.387 fused_ordering(97) 00:12:24.387 fused_ordering(98) 00:12:24.387 fused_ordering(99) 00:12:24.387 fused_ordering(100) 00:12:24.387 fused_ordering(101) 00:12:24.387 fused_ordering(102) 00:12:24.387 fused_ordering(103) 00:12:24.387 fused_ordering(104) 00:12:24.387 fused_ordering(105) 00:12:24.387 fused_ordering(106) 00:12:24.387 fused_ordering(107) 00:12:24.387 fused_ordering(108) 00:12:24.387 fused_ordering(109) 00:12:24.387 fused_ordering(110) 00:12:24.388 fused_ordering(111) 00:12:24.388 
00:12:24.388 fused_ordering(112) ... 00:12:24.650 fused_ordering(1018) [repetitive counter output condensed: fused_ordering(112) through fused_ordering(1018), one log line per counter, emitted between 00:12:24.388 and 00:12:24.650]
fused_ordering(1019) 00:12:24.650 fused_ordering(1020) 00:12:24.650 fused_ordering(1021) 00:12:24.650 fused_ordering(1022) 00:12:24.650 fused_ordering(1023) 00:12:24.650 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:24.650 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:24.650 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:24.650 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@99 -- # sync 00:12:24.650 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:12:24.650 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:12:24.650 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # set +e 00:12:24.650 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:24.650 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:12:24.650 rmmod nvme_rdma 00:12:24.650 rmmod nvme_fabrics 00:12:24.650 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:24.909 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # set -e 00:12:24.909 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # return 0 00:12:24.909 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # '[' -n 1600985 ']' 00:12:24.909 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@337 -- # killprocess 1600985 00:12:24.909 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1600985 ']' 00:12:24.909 11:35:28 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1600985 00:12:24.909 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:24.909 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.909 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1600985 00:12:24.909 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:24.909 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:24.909 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1600985' 00:12:24.909 killing process with pid 1600985 00:12:24.909 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1600985 00:12:24.909 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1600985 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # nvmf_fini 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@264 -- # local dev 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@267 -- # remove_target_ns 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # _remove_target_ns 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@130 -- # return 0 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:12:25.168 11:35:28 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # _dev=0 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # dev_map=() 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@284 -- # iptr 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@542 -- # iptables-save 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@542 -- # iptables-restore 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:12:25.168 00:12:25.168 real 0m7.844s 00:12:25.168 user 0m3.885s 00:12:25.168 sys 0m5.101s 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:25.168 ************************************ 00:12:25.168 END TEST nvmf_fused_ordering 00:12:25.168 ************************************ 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:12:25.168 ************************************ 00:12:25.168 START TEST nvmf_ns_masking 00:12:25.168 ************************************ 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:12:25.168 * Looking for test storage... 00:12:25.168 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:12:25.168 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:25.428 11:35:28 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:25.428 11:35:28 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:25.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.428 --rc genhtml_branch_coverage=1 00:12:25.428 --rc genhtml_function_coverage=1 00:12:25.428 --rc genhtml_legend=1 00:12:25.428 --rc geninfo_all_blocks=1 00:12:25.428 --rc geninfo_unexecuted_blocks=1 00:12:25.428 00:12:25.428 ' 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:25.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.428 --rc genhtml_branch_coverage=1 00:12:25.428 --rc genhtml_function_coverage=1 00:12:25.428 --rc genhtml_legend=1 00:12:25.428 --rc geninfo_all_blocks=1 00:12:25.428 --rc geninfo_unexecuted_blocks=1 00:12:25.428 00:12:25.428 ' 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:25.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.428 --rc genhtml_branch_coverage=1 00:12:25.428 --rc genhtml_function_coverage=1 00:12:25.428 --rc genhtml_legend=1 00:12:25.428 --rc geninfo_all_blocks=1 00:12:25.428 --rc geninfo_unexecuted_blocks=1 00:12:25.428 00:12:25.428 ' 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:25.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.428 --rc genhtml_branch_coverage=1 00:12:25.428 --rc genhtml_function_coverage=1 00:12:25.428 --rc genhtml_legend=1 00:12:25.428 --rc geninfo_all_blocks=1 00:12:25.428 --rc geninfo_unexecuted_blocks=1 00:12:25.428 00:12:25.428 ' 00:12:25.428 11:35:28 
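[Editorial note] The trace above (scripts/common.sh@333-368) walks through a component-wise version comparison: `lt 1.15 2` splits both versions on `.`/`-`/`:` into arrays and compares octet by octet. A minimal self-contained sketch of that technique, assuming a hypothetical `version_lt` name rather than the actual scripts/common.sh helpers:

```shell
#!/usr/bin/env bash
# Sketch of a component-wise "less than" version compare, in the spirit of
# the cmp_versions trace above. version_lt is an illustrative name, not the
# real helper from scripts/common.sh.
version_lt() {
    local IFS=.-:          # split version strings on '.', '-' and ':'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        # Missing components default to 0 (so 1.15 compares as 1.15.0...)
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1               # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```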
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.428 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@50 -- # : 0 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:25.429 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: 
[: : integer expression expected 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c7062277-f42e-48cb-b04a-4f77acef4270 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a095d8f2-5238-4641-8943-2004c3def6eb 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c2074957-5d39-4fa8-bb75-04c6ddee2938 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # 
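[Editorial note] The `/var/jenkins/.../common.sh: line 31: [: : integer expression expected` message above is a real bash diagnostic: `'[' '' -eq 1 ']'` fails because `-eq` requires both operands to be integers and the variable expanded to an empty string. A minimal reproduction (variable name is illustrative):

```shell
#!/usr/bin/env bash
# Reproduce the "[: : integer expression expected" failure mode seen in the
# log: a numeric test against an empty/unset variable.
flag=''
[ "$flag" -eq 1 ] 2>/dev/null
echo "exit status: $?"           # 2 = usage error, distinct from 1 = "false"

# Common guard: supply a numeric default before comparing.
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset or zero"
fi
```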
nvmftestinit 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # remove_target_ns 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # xtrace_disable 00:12:25.429 11:35:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@131 -- # pci_devs=() 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@132 -- # local -a 
pci_net_devs 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@135 -- # net_devs=() 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@136 -- # e810=() 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@136 -- # local -ga e810 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@137 -- # x722=() 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@137 -- # local -ga x722 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@138 -- # mlx=() 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@138 -- # local -ga mlx 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.998 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:31.999 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:12:31.999 11:35:34 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:31.999 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:31.999 11:35:34 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:31.999 Found net devices under 0000:18:00.0: mlx_0_0 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:31.999 Found net devices under 0000:18:00.1: mlx_0_1 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:12:31.999 11:35:34 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # get_rdma_if_list 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@75 -- # rdma_devs=() 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@89 -- # continue 2 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:31.999 11:35:34 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@89 -- # continue 2 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # is_hw=yes 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@61 -- # uname 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_cm 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_core 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_umad 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:12:31.999 
11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe iw_cm 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@28 -- # local -g _dev 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:12:31.999 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # ips=() 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:32.000 11:35:34 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@58 -- # key_initiator=target1 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772161 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # ip addr add 
10.0.0.1/24 dev mlx_0_0 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:12:32.000 10.0.0.1 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772162 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:12:32.000 10.0.0.2 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@75 -- # set_up 
mlx_0_0 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:32.000 11:35:34 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=target0 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:32.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:32.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.027 ms 00:12:32.000 00:12:32.000 --- 10.0.0.2 ping statistics --- 00:12:32.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.000 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=target0 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:32.000 
11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:32.000 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:32.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.027 ms 00:12:32.001 00:12:32.001 --- 10.0.0.2 ping statistics --- 00:12:32.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.001 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # return 0 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@192 -- # 
get_rdma_target_ip_address '' 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=target0 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:12:32.001 11:35:34 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=target1 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=target0 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:12:32.001 
11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=target1 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # nvmfpid=1604099 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # waitforlisten 1604099 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1604099 ']' 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:32.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.001 11:35:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:32.001 [2024-11-20 11:35:35.018654] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:12:32.001 [2024-11-20 11:35:35.018713] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.001 [2024-11-20 11:35:35.095485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.001 [2024-11-20 11:35:35.138136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.001 [2024-11-20 11:35:35.138183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.001 [2024-11-20 11:35:35.138193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.002 [2024-11-20 11:35:35.138202] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.002 [2024-11-20 11:35:35.138209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
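The `set_ip`/`val_to_ip` trace earlier in this run turns the integer pool values 167772161 and 167772162 into the addresses 10.0.0.1 and 10.0.0.2 via `printf '%u.%u.%u.%u\n'`. A minimal re-derivation of that helper (a sketch, not the exact `nvmf/setup.sh` source — setup.sh computes the four octets before the `printf` shown in the trace):

```shell
# Split a 32-bit integer into four octets and print a dotted-quad
# address: 167772161 == 0x0A000001 == 10.0.0.1.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (assigned to mlx_0_0 above)
val_to_ip 167772162   # 10.0.0.2 (assigned to mlx_0_1 above)
```

Incrementing `ip_pool` by 2 per device pair, as the `(( _dev++, ip_pool += 2 ))` step in the trace does, therefore yields consecutive /24 neighbors for each initiator/target pair.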
00:12:32.002 [2024-11-20 11:35:35.138678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.002 11:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.002 11:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:32.002 11:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:32.002 11:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:32.002 11:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:32.002 11:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.002 11:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:32.002 [2024-11-20 11:35:35.471805] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x550020/0x554510) succeed. 00:12:32.261 [2024-11-20 11:35:35.480734] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5514d0/0x595bb0) succeed. 
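The repeated `get_ip_address`/`get_net_dev` traces above all follow one pattern: a logical device name (`target0`, `target1`) is resolved through `dev_map` to a physical netdev, and the address is read back from the `ifalias` file that `set_ip` wrote. The sketch below mimics that lookup with a scratch directory standing in for `/sys/class/net`, so it runs without RDMA hardware (the directory layout is an assumption made for portability, not part of the test):

```shell
# Scratch stand-in for /sys/class/net, populated the way set_ip
# populated the real ifalias files earlier in this log.
sysfs=$(mktemp -d)
declare -A dev_map=([initiator0]=mlx_0_0 [target0]=mlx_0_1)

mkdir -p "$sysfs/mlx_0_0" "$sysfs/mlx_0_1"
echo 10.0.0.1 > "$sysfs/mlx_0_0/ifalias"
echo 10.0.0.2 > "$sysfs/mlx_0_1/ifalias"

get_ip_address() {
  local dev=${dev_map[$1]}       # logical -> physical, e.g. target0 -> mlx_0_1
  cat "$sysfs/$dev/ifalias"      # IP recorded at set_ip time
}

get_ip_address target0      # 10.0.0.2
get_ip_address initiator0   # 10.0.0.1
```

This is why `NVMF_FIRST_TARGET_IP` and `NVMF_FIRST_INITIATOR_IP` both resolve to 10.0.0.2 in the legacy-env section of the trace: on a loopback phy setup, `target0` maps to `mlx_0_1` for both roles.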
00:12:32.261 11:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:32.261 11:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:32.261 11:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:32.519 Malloc1 00:12:32.519 11:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:32.519 Malloc2 00:12:32.519 11:35:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:32.777 11:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:33.036 11:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:12:33.036 [2024-11-20 11:35:36.498736] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:12:33.036 11:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:33.036 11:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c2074957-5d39-4fa8-bb75-04c6ddee2938 -a 10.0.0.2 -s 4420 -i 4 00:12:33.602 11:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 
00:12:33.602 11:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:33.602 11:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.602 11:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:33.602 11:35:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:35.501 11:35:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:35.501 11:35:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:35.501 11:35:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.501 11:35:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:35.501 11:35:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.501 11:35:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:35.501 11:35:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:35.501 11:35:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:35.501 11:35:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:35.501 11:35:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:35.501 11:35:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:35.501 11:35:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 
-- # nvme list-ns /dev/nvme0 00:12:35.501 11:35:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.501 [ 0]:0x1 00:12:35.501 11:35:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.501 11:35:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.501 11:35:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efe3588970bb4ce8b1b299fc89133e11 00:12:35.501 11:35:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efe3588970bb4ce8b1b299fc89133e11 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.501 11:35:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:35.759 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:35.759 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.759 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.759 [ 0]:0x1 00:12:35.759 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.759 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.759 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efe3588970bb4ce8b1b299fc89133e11 00:12:35.759 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efe3588970bb4ce8b1b299fc89133e11 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.759 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:35.759 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.759 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:35.759 [ 1]:0x2 00:12:35.759 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:35.759 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:36.016 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4dd1d8e22d946d7ba378064d93ddb2b 00:12:36.016 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4dd1d8e22d946d7ba378064d93ddb2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:36.016 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:36.016 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.274 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.533 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:36.533 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:36.533 11:35:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c2074957-5d39-4fa8-bb75-04c6ddee2938 -a 
10.0.0.2 -s 4420 -i 4 00:12:37.099 11:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:37.099 11:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:37.099 11:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.099 11:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:37.099 11:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:37.099 11:35:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:39.001 11:35:42 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@655 -- # es=1 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.001 [ 0]:0x2 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.001 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.260 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4dd1d8e22d946d7ba378064d93ddb2b 00:12:39.260 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4dd1d8e22d946d7ba378064d93ddb2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.260 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:39.260 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:39.260 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.260 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.260 [ 0]:0x1 00:12:39.260 11:35:42 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.260 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.518 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efe3588970bb4ce8b1b299fc89133e11 00:12:39.518 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efe3588970bb4ce8b1b299fc89133e11 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.518 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:39.518 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.518 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.518 [ 1]:0x2 00:12:39.518 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.518 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.518 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4dd1d8e22d946d7ba378064d93ddb2b 00:12:39.518 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4dd1d8e22d946d7ba378064d93ddb2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.518 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:39.518 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:39.518 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 
00:12:39.518 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:39.519 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:39.777 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.777 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:39.777 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.777 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:39.777 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.777 11:35:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.777 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.777 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.777 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:39.777 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.777 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:39.777 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:39.777 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:39.777 11:35:43 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:39.777 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:39.777 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.777 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.777 [ 0]:0x2 00:12:39.777 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.777 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.777 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4dd1d8e22d946d7ba378064d93ddb2b 00:12:39.777 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4dd1d8e22d946d7ba378064d93ddb2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.777 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:39.777 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.036 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:40.293 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:40.293 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c2074957-5d39-4fa8-bb75-04c6ddee2938 -a 10.0.0.2 -s 4420 -i 4 00:12:40.551 11:35:43 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:40.551 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:40.551 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.551 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:40.551 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:40.551 11:35:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:43.083 11:35:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:43.083 11:35:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:43.083 11:35:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.083 11:35:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:43.083 11:35:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.083 11:35:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:43.083 11:35:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:43.083 11:35:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:43.083 11:35:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:43.083 11:35:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:43.083 11:35:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:43.083 11:35:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.083 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:43.083 [ 0]:0x1 00:12:43.083 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:43.083 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.083 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efe3588970bb4ce8b1b299fc89133e11 00:12:43.083 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efe3588970bb4ce8b1b299fc89133e11 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.083 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:43.083 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.083 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:43.083 [ 1]:0x2 00:12:43.083 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.083 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:43.083 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4dd1d8e22d946d7ba378064d93ddb2b 00:12:43.083 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4dd1d8e22d946d7ba378064d93ddb2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.083 11:35:46 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.084 [ 0]:0x2 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4dd1d8e22d946d7ba378064d93ddb2b 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4dd1d8e22d946d7ba378064d93ddb2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:43.084 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:43.343 [2024-11-20 11:35:46.584516] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:43.343 request: 00:12:43.343 { 00:12:43.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.343 "nsid": 2, 00:12:43.343 "host": "nqn.2016-06.io.spdk:host1", 00:12:43.343 "method": 
"nvmf_ns_remove_host", 00:12:43.343 "req_id": 1 00:12:43.343 } 00:12:43.343 Got JSON-RPC error response 00:12:43.343 response: 00:12:43.343 { 00:12:43.343 "code": -32602, 00:12:43.343 "message": "Invalid parameters" 00:12:43.343 } 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:43.343 11:35:46 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:43.343 [ 0]:0x2 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4dd1d8e22d946d7ba378064d93ddb2b 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4dd1d8e22d946d7ba378064d93ddb2b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.343 
11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:43.343 11:35:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.602 11:35:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1605852 00:12:43.602 11:35:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:43.602 11:35:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.602 11:35:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1605852 /var/tmp/host.sock 00:12:43.602 11:35:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1605852 ']' 00:12:43.602 11:35:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:43.602 11:35:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.602 11:35:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:43.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:43.602 11:35:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.602 11:35:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:43.860 [2024-11-20 11:35:47.103851] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:12:43.860 [2024-11-20 11:35:47.103916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605852 ] 00:12:43.860 [2024-11-20 11:35:47.178897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.860 [2024-11-20 11:35:47.226217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.796 11:35:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.796 11:35:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:44.796 11:35:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.796 11:35:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:45.054 11:35:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c7062277-f42e-48cb-b04a-4f77acef4270 00:12:45.054 11:35:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:12:45.054 11:35:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C7062277F42E48CBB04A4F77ACEF4270 -i 00:12:45.054 11:35:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a095d8f2-5238-4641-8943-2004c3def6eb 00:12:45.054 11:35:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:12:45.054 11:35:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A095D8F25238464189432004C3DEF6EB -i 00:12:45.312 11:35:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:45.570 11:35:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:45.828 11:35:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:45.828 11:35:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:46.086 nvme0n1 00:12:46.086 11:35:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:46.086 11:35:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:46.344 nvme1n2 00:12:46.344 11:35:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:46.344 11:35:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:46.344 11:35:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:46.345 11:35:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:46.345 11:35:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:46.604 11:35:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:46.604 11:35:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:46.604 11:35:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:46.604 11:35:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:46.604 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c7062277-f42e-48cb-b04a-4f77acef4270 == \c\7\0\6\2\2\7\7\-\f\4\2\e\-\4\8\c\b\-\b\0\4\a\-\4\f\7\7\a\c\e\f\4\2\7\0 ]] 00:12:46.604 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:46.864 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:46.864 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:46.864 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ a095d8f2-5238-4641-8943-2004c3def6eb == \a\0\9\5\d\8\f\2\-\5\2\3\8\-\4\6\4\1\-\8\9\4\3\-\2\0\0\4\c\3\d\e\f\6\e\b ]] 00:12:46.864 11:35:50 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.175 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:47.481 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid c7062277-f42e-48cb-b04a-4f77acef4270 00:12:47.481 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:12:47.481 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C7062277F42E48CBB04A4F77ACEF4270 00:12:47.481 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:47.481 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C7062277F42E48CBB04A4F77ACEF4270 00:12:47.481 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:47.481 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.481 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:47.481 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.482 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:47.482 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.482 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:47.482 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:47.482 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C7062277F42E48CBB04A4F77ACEF4270 00:12:47.482 [2024-11-20 11:35:50.894893] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:47.482 [2024-11-20 11:35:50.894927] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:47.482 [2024-11-20 11:35:50.894938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.482 request: 00:12:47.482 { 00:12:47.482 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:47.482 "namespace": { 00:12:47.482 "bdev_name": "invalid", 00:12:47.482 "nsid": 1, 00:12:47.482 "nguid": "C7062277F42E48CBB04A4F77ACEF4270", 00:12:47.482 "no_auto_visible": false 00:12:47.482 }, 00:12:47.482 "method": "nvmf_subsystem_add_ns", 00:12:47.482 "req_id": 1 00:12:47.482 } 00:12:47.482 Got JSON-RPC error response 00:12:47.482 response: 00:12:47.482 { 00:12:47.482 "code": -32602, 00:12:47.482 "message": "Invalid parameters" 00:12:47.482 } 00:12:47.482 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:47.482 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:47.482 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:47.482 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:47.482 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid c7062277-f42e-48cb-b04a-4f77acef4270 00:12:47.482 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:12:47.482 11:35:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C7062277F42E48CBB04A4F77ACEF4270 -i 00:12:47.741 11:35:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:50.273 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:50.273 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:50.273 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:50.273 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:50.273 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1605852 00:12:50.273 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1605852 ']' 00:12:50.273 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1605852 00:12:50.273 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:50.273 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:50.273 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1605852 00:12:50.273 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:50.273 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:50.273 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1605852' 00:12:50.273 killing process with pid 1605852 00:12:50.273 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1605852 00:12:50.273 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1605852 00:12:50.273 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.533 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:50.533 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:50.533 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:50.533 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@99 -- # sync 00:12:50.533 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:12:50.533 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:12:50.533 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # set +e 00:12:50.533 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:50.533 11:35:53 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:12:50.533 rmmod nvme_rdma 00:12:50.533 rmmod nvme_fabrics 00:12:50.533 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:50.533 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # set -e 00:12:50.533 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # return 0 00:12:50.533 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # '[' -n 1604099 ']' 00:12:50.533 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@337 -- # killprocess 1604099 00:12:50.533 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1604099 ']' 00:12:50.533 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1604099 00:12:50.533 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:50.533 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:50.533 11:35:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1604099 00:12:50.792 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:50.792 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:50.792 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1604099' 00:12:50.792 killing process with pid 1604099 00:12:50.792 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1604099 00:12:50.792 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 
1604099 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # nvmf_fini 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@264 -- # local dev 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@267 -- # remove_target_ns 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@130 -- # return 0 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:12:51.052 
11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # _dev=0 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # dev_map=() 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@284 -- # iptr 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@542 -- # iptables-save 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@542 -- # iptables-restore 00:12:51.052 00:12:51.052 real 0m25.798s 00:12:51.052 user 0m33.357s 00:12:51.052 sys 0m7.366s 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:51.052 ************************************ 00:12:51.052 END TEST nvmf_ns_masking 00:12:51.052 ************************************ 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:51.052 ************************************ 00:12:51.052 START TEST nvmf_nvme_cli 00:12:51.052 ************************************ 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:51.052 * Looking for test storage... 
00:12:51.052 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:12:51.052 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:51.311 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:51.311 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:51.311 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:51.311 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:51.311 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:51.312 11:35:54 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:51.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.312 
--rc genhtml_branch_coverage=1 00:12:51.312 --rc genhtml_function_coverage=1 00:12:51.312 --rc genhtml_legend=1 00:12:51.312 --rc geninfo_all_blocks=1 00:12:51.312 --rc geninfo_unexecuted_blocks=1 00:12:51.312 00:12:51.312 ' 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:51.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.312 --rc genhtml_branch_coverage=1 00:12:51.312 --rc genhtml_function_coverage=1 00:12:51.312 --rc genhtml_legend=1 00:12:51.312 --rc geninfo_all_blocks=1 00:12:51.312 --rc geninfo_unexecuted_blocks=1 00:12:51.312 00:12:51.312 ' 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:51.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.312 --rc genhtml_branch_coverage=1 00:12:51.312 --rc genhtml_function_coverage=1 00:12:51.312 --rc genhtml_legend=1 00:12:51.312 --rc geninfo_all_blocks=1 00:12:51.312 --rc geninfo_unexecuted_blocks=1 00:12:51.312 00:12:51.312 ' 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:51.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.312 --rc genhtml_branch_coverage=1 00:12:51.312 --rc genhtml_function_coverage=1 00:12:51.312 --rc genhtml_legend=1 00:12:51.312 --rc geninfo_all_blocks=1 00:12:51.312 --rc geninfo_unexecuted_blocks=1 00:12:51.312 00:12:51.312 ' 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.312 
11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@50 -- # 
: 0 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:51.312 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:51.313 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 
-- # local -g is_hw=no 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # remove_target_ns 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # xtrace_disable 00:12:51.313 11:35:54 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@131 -- # pci_devs=() 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@135 -- # net_devs=() 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:57.877 11:36:00 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@136 -- # e810=() 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@136 -- # local -ga e810 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@137 -- # x722=() 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@137 -- # local -ga x722 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@138 -- # mlx=() 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@138 -- # local -ga mlx 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli 
-- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:57.877 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect 
-i 15' 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:57.877 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:57.877 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:57.878 
Found net devices under 0000:18:00.0: mlx_0_0 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:57.878 Found net devices under 0000:18:00.1: mlx_0_1 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # get_rdma_if_list 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@75 -- # rdma_devs=() 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@89 -- # continue 2 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@89 -- # continue 2 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # is_hw=yes 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli 
-- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@61 -- # uname 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_cm 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_core 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_umad 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe iw_cm 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:57.878 11:36:00 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@28 -- # local -g _dev 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@44 -- # ips=() 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@58 -- # key_initiator=target1 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:12:57.878 11:36:00 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@11 -- # local val=167772161 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:12:57.878 10.0.0.1 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:12:57.878 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:57.879 
11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@11 -- # local val=167772162 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:12:57.879 10.0.0.2 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:57.879 11:36:00 
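The `set_ip` calls traced above turn the integer `ip_pool` values (167772161 is 0x0a000001) into dotted-quad addresses via `val_to_ip`, which the trace shows ending in `printf '%u.%u.%u.%u\n' 10 0 0 1`. A minimal standalone re-creation of that conversion (the bit-shift decomposition here is an assumption about how `setup.sh` computes the four octets; only the `printf` output format is confirmed by the trace):

```shell
# Sketch of the val_to_ip helper from nvmf/setup.sh: unpack a 32-bit
# integer into its four octets and print them dotted-quad style.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8) & 0xff )) \
        $(( val & 0xff ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1, as in the trace
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2
```

This matches the trace: pair 0 consumes 167772161/167772162, and the loop's `(( _dev++, ip_pool += 2 ))` would hand the next pair 10.0.0.3/10.0.0.4.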
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local 
dev=target0 in_ns= ip 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=target0 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:57.879 11:36:00 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:57.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:12:57.879 00:12:57.879 --- 10.0.0.2 ping statistics --- 00:12:57.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.879 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=target0 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.2 
00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:57.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.029 ms 00:12:57.879 00:12:57.879 --- 10.0.0.2 ping statistics --- 00:12:57.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.879 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # return 0 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:12:57.879 11:36:00 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:12:57.879 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=target0 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@335 -- # 
get_rdma_initiator_ip_address 1 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=target1 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli 
-- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=target0 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:12:57.880 11:36:00 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=target1 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 
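The repeated `get_ip_address` / `cat /sys/class/net/<dev>/ifalias` sequences above show how the scripts recover each interface's address: `set_ip` stashed it in the kernel's `ifalias` attribute, and every later lookup just reads it back. A sketch of that read path (the `SYSFS_ROOT` override is hypothetical, added so the sketch can run without a real mlx device; the real helper also resolves `target0`/`target1` keys to device names via `dev_map`, which is omitted here):

```shell
# Sketch of the ifalias-based IP lookup seen in the trace.
# SYSFS_ROOT defaults to the real sysfs path; tests can point it
# at a scratch directory instead of a live NIC.
SYSFS_ROOT=${SYSFS_ROOT:-/sys/class/net}

get_ip_address() {
    local dev=$1
    # set_ip previously did: echo <ip> | tee $SYSFS_ROOT/$dev/ifalias
    cat "$SYSFS_ROOT/$dev/ifalias"
}
```

In the trace, `target0` maps to `mlx_0_1` (ifalias 10.0.0.2, becoming `NVMF_FIRST_TARGET_IP`) and `target1` to `mlx_0_0` (10.0.0.1, becoming `NVMF_SECOND_TARGET_IP`).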
00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # nvmfpid=1609680 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # waitforlisten 1609680 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1609680 ']' 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:57.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.880 11:36:00 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.881 [2024-11-20 11:36:00.823666] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:12:57.881 [2024-11-20 11:36:00.823720] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.881 [2024-11-20 11:36:00.903824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.881 [2024-11-20 11:36:00.953366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.881 [2024-11-20 11:36:00.953409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.881 [2024-11-20 11:36:00.953419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.881 [2024-11-20 11:36:00.953428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.881 [2024-11-20 11:36:00.953435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:57.881 [2024-11-20 11:36:00.954879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.881 [2024-11-20 11:36:00.954965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.881 [2024-11-20 11:36:00.955162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.881 [2024-11-20 11:36:00.955165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.881 [2024-11-20 11:36:01.134912] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1613220/0x1617710) succeed. 00:12:57.881 [2024-11-20 11:36:01.144194] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16148b0/0x1658db0) succeed. 
00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.881 Malloc0 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.881 Malloc1 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.881 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli 
-- common/autotest_common.sh@10 -- # set +x 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:58.140 [2024-11-20 11:36:01.369097] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 10.0.0.2 -s 4420 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -a 10.0.0.2 -s 4420 00:12:58.140 00:12:58.140 Discovery Log Number of Records 2, Generation counter 2 00:12:58.140 =====Discovery Log Entry 0====== 00:12:58.140 trtype: rdma 00:12:58.140 adrfam: ipv4 00:12:58.140 subtype: current discovery subsystem 00:12:58.140 treq: not required 00:12:58.140 portid: 0 00:12:58.140 trsvcid: 4420 00:12:58.140 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:58.140 traddr: 10.0.0.2 00:12:58.140 eflags: explicit discovery connections, duplicate discovery information 00:12:58.140 rdma_prtype: not specified 00:12:58.140 rdma_qptype: connected 00:12:58.140 rdma_cms: rdma-cm 00:12:58.140 rdma_pkey: 0x0000 00:12:58.140 =====Discovery Log Entry 1====== 00:12:58.140 trtype: rdma 00:12:58.140 adrfam: ipv4 00:12:58.140 subtype: nvme subsystem 00:12:58.140 treq: not required 00:12:58.140 portid: 0 00:12:58.140 trsvcid: 4420 00:12:58.140 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:58.140 traddr: 10.0.0.2 00:12:58.140 eflags: none 00:12:58.140 rdma_prtype: not specified 00:12:58.140 rdma_qptype: connected 00:12:58.140 rdma_cms: rdma-cm 00:12:58.140 rdma_pkey: 0x0000 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 
00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:58.140 11:36:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.074 11:36:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:59.074 11:36:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:12:59.074 11:36:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.074 11:36:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:59.074 11:36:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:59.074 11:36:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n1 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n2 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:01.607 /dev/nvme0n2 ]] 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:01.607 11:36:04 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n1 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n2 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:01.607 11:36:04 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 
00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@99 -- # sync 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 
-- # '[' rdma == rdma ']' 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # set +e 00:13:02.174 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:02.175 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:13:02.175 rmmod nvme_rdma 00:13:02.175 rmmod nvme_fabrics 00:13:02.175 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:02.175 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # set -e 00:13:02.175 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # return 0 00:13:02.175 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # '[' -n 1609680 ']' 00:13:02.175 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@337 -- # killprocess 1609680 00:13:02.175 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1609680 ']' 00:13:02.175 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1609680 00:13:02.175 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:02.175 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.175 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1609680 00:13:02.433 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:02.433 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:02.433 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1609680' 00:13:02.433 killing process with pid 1609680 
00:13:02.433 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1609680 00:13:02.433 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1609680 00:13:02.693 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:02.693 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # nvmf_fini 00:13:02.693 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@264 -- # local dev 00:13:02.693 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@267 -- # remove_target_ns 00:13:02.693 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:02.693 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:02.693 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:02.693 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@268 -- # delete_main_bridge 00:13:02.693 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:02.693 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@130 -- # return 0 00:13:02.693 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:02.693 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:13:02.693 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:13:02.693 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:13:02.693 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:13:02.693 11:36:05 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:13:02.693 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:13:02.693 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:13:02.693 11:36:05 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@41 -- # _dev=0 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@41 -- # dev_map=() 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@284 -- # iptr 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@542 -- # iptables-save 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@542 -- # iptables-restore 00:13:02.693 00:13:02.693 real 
0m11.621s 00:13:02.693 user 0m21.770s 00:13:02.693 sys 0m5.296s 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:02.693 ************************************ 00:13:02.693 END TEST nvmf_nvme_cli 00:13:02.693 ************************************ 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:02.693 ************************************ 00:13:02.693 START TEST nvmf_auth_target 00:13:02.693 ************************************ 00:13:02.693 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:13:02.952 * Looking for test storage... 
00:13:02.952 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
scripts/common.sh@345 -- # : 1 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:13:02.952 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:13:02.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.953 --rc genhtml_branch_coverage=1 00:13:02.953 --rc genhtml_function_coverage=1 00:13:02.953 --rc genhtml_legend=1 00:13:02.953 --rc geninfo_all_blocks=1 00:13:02.953 --rc geninfo_unexecuted_blocks=1 00:13:02.953 00:13:02.953 ' 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:02.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.953 --rc genhtml_branch_coverage=1 00:13:02.953 --rc genhtml_function_coverage=1 00:13:02.953 --rc genhtml_legend=1 00:13:02.953 --rc geninfo_all_blocks=1 00:13:02.953 --rc geninfo_unexecuted_blocks=1 00:13:02.953 00:13:02.953 ' 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:02.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.953 --rc genhtml_branch_coverage=1 00:13:02.953 --rc genhtml_function_coverage=1 00:13:02.953 --rc genhtml_legend=1 00:13:02.953 --rc geninfo_all_blocks=1 00:13:02.953 --rc geninfo_unexecuted_blocks=1 00:13:02.953 00:13:02.953 ' 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:02.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.953 --rc genhtml_branch_coverage=1 00:13:02.953 --rc genhtml_function_coverage=1 00:13:02.953 --rc genhtml_legend=1 00:13:02.953 --rc geninfo_all_blocks=1 00:13:02.953 --rc geninfo_unexecuted_blocks=1 00:13:02.953 00:13:02.953 ' 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.953 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@50 -- # : 0 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:02.954 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 
00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # remove_target_ns 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # xtrace_disable 00:13:02.954 11:36:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@131 -- # pci_devs=() 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@131 -- # local -a pci_devs 
00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@135 -- # net_devs=() 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@136 -- # e810=() 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@136 -- # local -ga e810 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@137 -- # x722=() 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@137 -- # local -ga x722 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@138 -- # mlx=() 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@138 -- # local -ga mlx 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.518 
11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 
(0x15b3 - 0x1015)' 00:13:09.518 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:09.518 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:13:09.518 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:09.519 11:36:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:09.519 Found net devices under 0000:18:00.0: mlx_0_0 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:09.519 Found net devices under 0000:18:00.1: mlx_0_1 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 
00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # get_rdma_if_list 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@75 -- # rdma_devs=() 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@89 -- # continue 2 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:09.519 11:36:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@89 -- # continue 2 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # is_hw=yes 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@61 -- # uname 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_cm 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@66 -- # modprobe ib_core 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_umad 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe iw_cm 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@28 -- # local -g _dev 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:09.519 11:36:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # key_initiator=target1 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772161 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@207 -- # ip=10.0.0.1 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:13:09.519 10.0.0.1 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772162 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:13:09.519 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:13:09.520 10.0.0.2 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- 
# (( _dev++, ip_pool += 2 )) 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target0 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:13:09.520 11:36:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:13:09.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:09.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 00:13:09.520 00:13:09.520 --- 10.0.0.2 ping statistics --- 00:13:09.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.520 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target0 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 
]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:13:09.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.024 ms 00:13:09.520 00:13:09.520 --- 10.0.0.2 ping statistics --- 00:13:09.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.520 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # return 0 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target0 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@335 -- # 
get_rdma_initiator_ip_address 1 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:09.520 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target1 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:13:09.521 
11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target0 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.521 11:36:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target1 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=1613798 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 1613798 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1613798 ']' 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.521 11:36:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1613817 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=null 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=a47af30097f23ee40e047ead0c6f4f32cabfc2157f8aeb44 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.H6z 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key a47af30097f23ee40e047ead0c6f4f32cabfc2157f8aeb44 0 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 a47af30097f23ee40e047ead0c6f4f32cabfc2157f8aeb44 0 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=a47af30097f23ee40e047ead0c6f4f32cabfc2157f8aeb44 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=0 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.H6z 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo 
/tmp/spdk.key-null.H6z 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.H6z 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:13:09.521 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:13:09.780 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:09.780 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=12a1aa14f0f28f61fe32bfda5450283267ae07a80121ce9753a766c787f79754 00:13:09.780 11:36:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.oTx 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 12a1aa14f0f28f61fe32bfda5450283267ae07a80121ce9753a766c787f79754 3 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 12a1aa14f0f28f61fe32bfda5450283267ae07a80121ce9753a766c787f79754 3 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:13:09.780 
11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=12a1aa14f0f28f61fe32bfda5450283267ae07a80121ce9753a766c787f79754 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.oTx 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.oTx 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.oTx 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=9c9e284b8db67eb0514aa73395614b37 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.YxO 00:13:09.780 11:36:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 9c9e284b8db67eb0514aa73395614b37 1 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 9c9e284b8db67eb0514aa73395614b37 1 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=9c9e284b8db67eb0514aa73395614b37 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.YxO 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.YxO 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.YxO 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:13:09.780 11:36:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=2b56c16cf7a81a91597d47ae94a0eb6a4db7e0538df5c95c 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.wZh 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 2b56c16cf7a81a91597d47ae94a0eb6a4db7e0538df5c95c 2 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 2b56c16cf7a81a91597d47ae94a0eb6a4db7e0538df5c95c 2 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=2b56c16cf7a81a91597d47ae94a0eb6a4db7e0538df5c95c 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.wZh 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.wZh 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.wZh 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # 
local digest len file key 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=1aacf063ed26b69fd778ced05af7b09bb34a800758ed7c7c 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.KXF 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 1aacf063ed26b69fd778ced05af7b09bb34a800758ed7c7c 2 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 1aacf063ed26b69fd778ced05af7b09bb34a800758ed7c7c 2 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=1aacf063ed26b69fd778ced05af7b09bb34a800758ed7c7c 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:13:09.780 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.KXF 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.KXF 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.KXF 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=6adfc2ff7988314ea985d67ae1d1a156 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.jnX 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 6adfc2ff7988314ea985d67ae1d1a156 1 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 6adfc2ff7988314ea985d67ae1d1a156 1 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:13:10.039 11:36:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=6adfc2ff7988314ea985d67ae1d1a156 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.jnX 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.jnX 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.jnX 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:10.039 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=79f2b794e280a5c96935e4199ed879c5e836e023c22b29b630a2d9358c7ecff1 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:13:10.040 11:36:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.p22 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 79f2b794e280a5c96935e4199ed879c5e836e023c22b29b630a2d9358c7ecff1 3 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 79f2b794e280a5c96935e4199ed879c5e836e023c22b29b630a2d9358c7ecff1 3 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=79f2b794e280a5c96935e4199ed879c5e836e023c22b29b630a2d9358c7ecff1 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.p22 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.p22 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.p22 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1613798 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1613798 ']' 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.040 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.298 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.298 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:10.298 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1613817 /var/tmp/host.sock 00:13:10.298 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1613817 ']' 00:13:10.298 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:10.298 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.298 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:10.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:13:10.299 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.299 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.557 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.557 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:10.558 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:13:10.558 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.558 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.558 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.558 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:10.558 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.H6z 00:13:10.558 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.558 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.558 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.558 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.H6z 00:13:10.558 11:36:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.H6z 00:13:10.815 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- 
# [[ -n /tmp/spdk.key-sha512.oTx ]] 00:13:10.815 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oTx 00:13:10.815 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.815 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.815 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.815 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oTx 00:13:10.815 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oTx 00:13:10.815 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:10.815 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.YxO 00:13:10.815 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.815 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.815 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.815 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.YxO 00:13:10.815 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.YxO 00:13:11.074 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.wZh ]] 00:13:11.074 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wZh 00:13:11.074 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.074 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.074 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.074 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wZh 00:13:11.074 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wZh 00:13:11.333 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:11.333 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.KXF 00:13:11.333 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.333 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.333 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.333 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.KXF 00:13:11.333 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.KXF 00:13:11.592 11:36:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.jnX ]] 00:13:11.592 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jnX 00:13:11.592 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.592 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.592 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.592 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jnX 00:13:11.592 11:36:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jnX 00:13:11.592 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:11.592 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.p22 00:13:11.592 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.592 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.851 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.851 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.p22 00:13:11.851 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.p22 
00:13:11.851 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:13:11.851 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:11.851 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:11.851 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:11.851 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:11.851 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:12.110 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:13:12.110 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.110 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:12.110 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:12.110 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:12.110 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.110 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.110 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:12.110 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.110 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.110 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.110 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.110 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.369 00:13:12.370 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.370 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.370 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.629 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.629 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.629 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.629 11:36:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.629 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.629 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.629 { 00:13:12.629 "cntlid": 1, 00:13:12.629 "qid": 0, 00:13:12.629 "state": "enabled", 00:13:12.629 "thread": "nvmf_tgt_poll_group_000", 00:13:12.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:12.629 "listen_address": { 00:13:12.629 "trtype": "RDMA", 00:13:12.629 "adrfam": "IPv4", 00:13:12.629 "traddr": "10.0.0.2", 00:13:12.629 "trsvcid": "4420" 00:13:12.629 }, 00:13:12.629 "peer_address": { 00:13:12.629 "trtype": "RDMA", 00:13:12.629 "adrfam": "IPv4", 00:13:12.629 "traddr": "10.0.0.2", 00:13:12.629 "trsvcid": "60434" 00:13:12.629 }, 00:13:12.629 "auth": { 00:13:12.629 "state": "completed", 00:13:12.629 "digest": "sha256", 00:13:12.629 "dhgroup": "null" 00:13:12.629 } 00:13:12.629 } 00:13:12.629 ]' 00:13:12.629 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.629 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:12.629 11:36:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:12.629 11:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:12.629 11:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:12.629 11:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.629 11:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.629 11:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.888 11:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:13:12.888 11:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:13:13.457 11:36:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.716 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:13.716 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.716 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.716 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.716 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:13.716 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:13.716 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:13.975 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:13:13.975 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:13.975 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:13.975 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:13.975 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:13.975 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.975 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.975 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.975 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.975 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.975 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.975 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.975 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.235 00:13:14.235 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:14.235 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:14.235 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.235 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.235 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.235 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.235 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.235 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.235 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.235 { 00:13:14.235 "cntlid": 3, 00:13:14.235 "qid": 0, 00:13:14.235 "state": "enabled", 00:13:14.235 "thread": "nvmf_tgt_poll_group_000", 00:13:14.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:14.235 "listen_address": { 
00:13:14.235 "trtype": "RDMA", 00:13:14.235 "adrfam": "IPv4", 00:13:14.235 "traddr": "10.0.0.2", 00:13:14.235 "trsvcid": "4420" 00:13:14.235 }, 00:13:14.235 "peer_address": { 00:13:14.235 "trtype": "RDMA", 00:13:14.235 "adrfam": "IPv4", 00:13:14.235 "traddr": "10.0.0.2", 00:13:14.235 "trsvcid": "54007" 00:13:14.235 }, 00:13:14.235 "auth": { 00:13:14.235 "state": "completed", 00:13:14.235 "digest": "sha256", 00:13:14.235 "dhgroup": "null" 00:13:14.235 } 00:13:14.235 } 00:13:14.235 ]' 00:13:14.235 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.494 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:14.494 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:14.494 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:14.494 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:14.494 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.494 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.494 11:36:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.753 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:13:14.753 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:13:15.322 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.322 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:15.322 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.322 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.581 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.581 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:15.581 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:15.581 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:15.581 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:13:15.581 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.581 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 
00:13:15.581 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:15.581 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:15.581 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.581 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.581 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.581 11:36:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.582 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.582 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.582 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.582 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.841 00:13:15.841 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.841 11:36:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.841 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.100 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.100 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.100 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.100 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.100 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.100 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.100 { 00:13:16.100 "cntlid": 5, 00:13:16.100 "qid": 0, 00:13:16.100 "state": "enabled", 00:13:16.100 "thread": "nvmf_tgt_poll_group_000", 00:13:16.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:16.100 "listen_address": { 00:13:16.100 "trtype": "RDMA", 00:13:16.100 "adrfam": "IPv4", 00:13:16.100 "traddr": "10.0.0.2", 00:13:16.100 "trsvcid": "4420" 00:13:16.100 }, 00:13:16.100 "peer_address": { 00:13:16.100 "trtype": "RDMA", 00:13:16.100 "adrfam": "IPv4", 00:13:16.100 "traddr": "10.0.0.2", 00:13:16.100 "trsvcid": "50977" 00:13:16.100 }, 00:13:16.100 "auth": { 00:13:16.100 "state": "completed", 00:13:16.100 "digest": "sha256", 00:13:16.100 "dhgroup": "null" 00:13:16.100 } 00:13:16.100 } 00:13:16.100 ]' 00:13:16.100 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.100 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:13:16.100 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.100 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:16.100 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.100 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.100 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.100 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.360 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:13:16.360 11:36:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:13:16.927 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.191 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:17.191 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.191 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.191 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.191 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:17.191 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:17.191 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:17.450 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:13:17.450 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.450 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:17.450 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:17.450 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:17.450 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.450 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:13:17.450 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.450 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.450 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.450 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:17.450 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:17.450 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:17.708 00:13:17.709 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.709 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.709 11:36:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.709 11:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.709 11:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.709 11:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.709 11:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.967 11:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.967 11:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.967 { 00:13:17.967 "cntlid": 7, 00:13:17.967 "qid": 0, 00:13:17.967 "state": "enabled", 00:13:17.967 "thread": "nvmf_tgt_poll_group_000", 00:13:17.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:17.967 "listen_address": { 00:13:17.967 "trtype": "RDMA", 00:13:17.967 "adrfam": "IPv4", 00:13:17.967 "traddr": "10.0.0.2", 00:13:17.967 "trsvcid": "4420" 00:13:17.967 }, 00:13:17.967 "peer_address": { 00:13:17.967 "trtype": "RDMA", 00:13:17.967 "adrfam": "IPv4", 00:13:17.967 "traddr": "10.0.0.2", 00:13:17.967 "trsvcid": "34620" 00:13:17.967 }, 00:13:17.967 "auth": { 00:13:17.967 "state": "completed", 00:13:17.967 "digest": "sha256", 00:13:17.967 "dhgroup": "null" 00:13:17.967 } 00:13:17.967 } 00:13:17.967 ]' 00:13:17.967 11:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.967 11:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:17.967 11:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.967 11:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:17.967 11:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.967 11:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.967 11:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.967 11:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.225 11:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:13:18.225 11:36:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:13:18.791 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.791 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:18.791 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.791 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.050 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.050 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:19.050 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:19.050 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:19.050 11:36:22 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:19.050 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:13:19.050 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.050 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:19.050 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:19.050 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:19.050 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.050 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.051 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.051 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.051 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.051 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.051 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.051 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.309 00:13:19.309 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.309 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.309 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.568 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.568 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.568 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.568 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.568 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.568 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.568 { 00:13:19.568 "cntlid": 9, 00:13:19.568 "qid": 0, 00:13:19.568 "state": "enabled", 00:13:19.568 "thread": "nvmf_tgt_poll_group_000", 00:13:19.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:19.568 "listen_address": { 00:13:19.568 "trtype": "RDMA", 00:13:19.568 "adrfam": "IPv4", 00:13:19.568 "traddr": "10.0.0.2", 
00:13:19.568 "trsvcid": "4420" 00:13:19.568 }, 00:13:19.568 "peer_address": { 00:13:19.568 "trtype": "RDMA", 00:13:19.568 "adrfam": "IPv4", 00:13:19.568 "traddr": "10.0.0.2", 00:13:19.568 "trsvcid": "54719" 00:13:19.568 }, 00:13:19.568 "auth": { 00:13:19.568 "state": "completed", 00:13:19.568 "digest": "sha256", 00:13:19.568 "dhgroup": "ffdhe2048" 00:13:19.568 } 00:13:19.568 } 00:13:19.568 ]' 00:13:19.568 11:36:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.568 11:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:19.568 11:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.826 11:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:19.826 11:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.826 11:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.826 11:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.826 11:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.084 11:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:13:20.085 11:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:13:20.652 11:36:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.652 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:20.652 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.652 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.652 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.652 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.652 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:20.652 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:20.910 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:13:20.910 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.910 11:36:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:20.910 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:20.910 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:20.910 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.910 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.910 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.910 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.910 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.910 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.910 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.910 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.168 00:13:21.168 11:36:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.168 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.168 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.427 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.427 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.427 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.427 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.427 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.427 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.427 { 00:13:21.427 "cntlid": 11, 00:13:21.427 "qid": 0, 00:13:21.427 "state": "enabled", 00:13:21.427 "thread": "nvmf_tgt_poll_group_000", 00:13:21.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:21.427 "listen_address": { 00:13:21.427 "trtype": "RDMA", 00:13:21.427 "adrfam": "IPv4", 00:13:21.427 "traddr": "10.0.0.2", 00:13:21.427 "trsvcid": "4420" 00:13:21.427 }, 00:13:21.427 "peer_address": { 00:13:21.427 "trtype": "RDMA", 00:13:21.427 "adrfam": "IPv4", 00:13:21.427 "traddr": "10.0.0.2", 00:13:21.427 "trsvcid": "59289" 00:13:21.427 }, 00:13:21.427 "auth": { 00:13:21.427 "state": "completed", 00:13:21.427 "digest": "sha256", 00:13:21.427 "dhgroup": "ffdhe2048" 00:13:21.427 } 00:13:21.427 } 00:13:21.427 ]' 00:13:21.427 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 
-- # jq -r '.[0].auth.digest' 00:13:21.427 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:21.427 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.427 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:21.427 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.427 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.427 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.427 11:36:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.686 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:13:21.686 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:13:22.253 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.512 11:36:25 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:22.512 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.512 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.512 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.512 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.512 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:22.512 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:22.770 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:13:22.770 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.770 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:22.770 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:22.770 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:22.770 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.770 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.770 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.770 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.770 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.770 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.770 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.770 11:36:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.770 00:13:23.029 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.029 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.029 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.029 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.029 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.029 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.029 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.029 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.029 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.029 { 00:13:23.029 "cntlid": 13, 00:13:23.029 "qid": 0, 00:13:23.029 "state": "enabled", 00:13:23.029 "thread": "nvmf_tgt_poll_group_000", 00:13:23.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:23.029 "listen_address": { 00:13:23.029 "trtype": "RDMA", 00:13:23.029 "adrfam": "IPv4", 00:13:23.029 "traddr": "10.0.0.2", 00:13:23.029 "trsvcid": "4420" 00:13:23.029 }, 00:13:23.029 "peer_address": { 00:13:23.029 "trtype": "RDMA", 00:13:23.029 "adrfam": "IPv4", 00:13:23.029 "traddr": "10.0.0.2", 00:13:23.029 "trsvcid": "60320" 00:13:23.029 }, 00:13:23.029 "auth": { 00:13:23.029 "state": "completed", 00:13:23.029 "digest": "sha256", 00:13:23.029 "dhgroup": "ffdhe2048" 00:13:23.029 } 00:13:23.029 } 00:13:23.029 ]' 00:13:23.029 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.288 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:23.288 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.288 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:23.288 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.288 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:13:23.288 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.288 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.546 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:13:23.547 11:36:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:13:24.112 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.112 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:24.112 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.112 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.112 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.112 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # 
for keyid in "${!keys[@]}" 00:13:24.112 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:24.112 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:24.370 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:13:24.370 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.370 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:24.370 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:24.370 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:24.370 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.370 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:13:24.370 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.370 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.370 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.370 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:24.370 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # 
hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:24.370 11:36:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:24.628 00:13:24.628 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.628 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.628 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.945 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.945 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.945 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.945 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.945 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.945 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:24.945 { 00:13:24.945 "cntlid": 15, 00:13:24.945 "qid": 0, 00:13:24.945 "state": "enabled", 00:13:24.945 "thread": "nvmf_tgt_poll_group_000", 00:13:24.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 
00:13:24.945 "listen_address": { 00:13:24.945 "trtype": "RDMA", 00:13:24.945 "adrfam": "IPv4", 00:13:24.945 "traddr": "10.0.0.2", 00:13:24.945 "trsvcid": "4420" 00:13:24.945 }, 00:13:24.945 "peer_address": { 00:13:24.945 "trtype": "RDMA", 00:13:24.945 "adrfam": "IPv4", 00:13:24.945 "traddr": "10.0.0.2", 00:13:24.945 "trsvcid": "33615" 00:13:24.945 }, 00:13:24.945 "auth": { 00:13:24.945 "state": "completed", 00:13:24.945 "digest": "sha256", 00:13:24.945 "dhgroup": "ffdhe2048" 00:13:24.945 } 00:13:24.945 } 00:13:24.945 ]' 00:13:24.945 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:24.945 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:24.945 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:24.945 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:24.945 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:24.945 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.945 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.945 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.266 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:13:25.267 11:36:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:13:25.835 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.095 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:26.095 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.095 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.095 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.095 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:26.095 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.095 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:26.095 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:26.095 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:13:26.095 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.095 11:36:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:26.095 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:26.095 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:26.095 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.354 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.354 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.354 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.354 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.354 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.354 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.354 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.612 00:13:26.612 11:36:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:26.612 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.612 11:36:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.612 11:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.612 11:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.612 11:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.612 11:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.612 11:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.612 11:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.612 { 00:13:26.612 "cntlid": 17, 00:13:26.612 "qid": 0, 00:13:26.612 "state": "enabled", 00:13:26.612 "thread": "nvmf_tgt_poll_group_000", 00:13:26.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:26.612 "listen_address": { 00:13:26.612 "trtype": "RDMA", 00:13:26.612 "adrfam": "IPv4", 00:13:26.612 "traddr": "10.0.0.2", 00:13:26.612 "trsvcid": "4420" 00:13:26.612 }, 00:13:26.612 "peer_address": { 00:13:26.612 "trtype": "RDMA", 00:13:26.612 "adrfam": "IPv4", 00:13:26.612 "traddr": "10.0.0.2", 00:13:26.612 "trsvcid": "58633" 00:13:26.612 }, 00:13:26.612 "auth": { 00:13:26.612 "state": "completed", 00:13:26.612 "digest": "sha256", 00:13:26.612 "dhgroup": "ffdhe3072" 00:13:26.612 } 00:13:26.612 } 00:13:26.612 ]' 00:13:26.612 11:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 
-- # jq -r '.[0].auth.digest' 00:13:26.870 11:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:26.870 11:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.870 11:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:26.870 11:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.870 11:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.870 11:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.870 11:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.128 11:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:13:27.128 11:36:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:13:27.694 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.694 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.694 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:27.694 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.694 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.694 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.694 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.694 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:27.694 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:27.952 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:13:27.952 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:27.952 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:27.952 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:27.952 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:27.952 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.952 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.952 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.952 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.952 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.952 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.952 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.952 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.210 00:13:28.210 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:28.210 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:28.210 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.469 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.469 11:36:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.469 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.469 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.469 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.469 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:28.469 { 00:13:28.469 "cntlid": 19, 00:13:28.469 "qid": 0, 00:13:28.469 "state": "enabled", 00:13:28.469 "thread": "nvmf_tgt_poll_group_000", 00:13:28.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:28.469 "listen_address": { 00:13:28.469 "trtype": "RDMA", 00:13:28.469 "adrfam": "IPv4", 00:13:28.469 "traddr": "10.0.0.2", 00:13:28.469 "trsvcid": "4420" 00:13:28.469 }, 00:13:28.469 "peer_address": { 00:13:28.469 "trtype": "RDMA", 00:13:28.469 "adrfam": "IPv4", 00:13:28.469 "traddr": "10.0.0.2", 00:13:28.469 "trsvcid": "46322" 00:13:28.469 }, 00:13:28.469 "auth": { 00:13:28.469 "state": "completed", 00:13:28.469 "digest": "sha256", 00:13:28.469 "dhgroup": "ffdhe3072" 00:13:28.469 } 00:13:28.469 } 00:13:28.469 ]' 00:13:28.469 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:28.469 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:28.469 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.728 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:28.728 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.728 11:36:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.728 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.728 11:36:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.728 11:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:13:28.728 11:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:13:29.663 11:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.663 11:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:29.663 11:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.663 11:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.663 11:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:29.663 11:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:29.663 11:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:29.663 11:36:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:29.922 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:13:29.922 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.922 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:29.922 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:29.922 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:29.922 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.922 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.922 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.922 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.922 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.922 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.922 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.922 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.181 00:13:30.181 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:30.181 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:30.181 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.181 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.181 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.181 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.181 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.181 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.181 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.181 { 00:13:30.181 "cntlid": 21, 00:13:30.181 
"qid": 0, 00:13:30.181 "state": "enabled", 00:13:30.181 "thread": "nvmf_tgt_poll_group_000", 00:13:30.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:30.181 "listen_address": { 00:13:30.181 "trtype": "RDMA", 00:13:30.181 "adrfam": "IPv4", 00:13:30.181 "traddr": "10.0.0.2", 00:13:30.181 "trsvcid": "4420" 00:13:30.181 }, 00:13:30.181 "peer_address": { 00:13:30.181 "trtype": "RDMA", 00:13:30.181 "adrfam": "IPv4", 00:13:30.181 "traddr": "10.0.0.2", 00:13:30.181 "trsvcid": "33847" 00:13:30.181 }, 00:13:30.181 "auth": { 00:13:30.181 "state": "completed", 00:13:30.181 "digest": "sha256", 00:13:30.181 "dhgroup": "ffdhe3072" 00:13:30.181 } 00:13:30.181 } 00:13:30.181 ]' 00:13:30.181 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.439 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:30.439 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.439 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:30.439 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.439 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.439 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.439 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.698 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:13:30.698 11:36:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:13:31.263 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.263 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:31.263 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.263 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.263 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.263 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.263 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:31.263 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:31.520 11:36:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:13:31.520 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.520 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:31.520 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:31.521 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:31.521 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.521 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:13:31.521 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.521 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.521 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.521 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:31.521 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:31.521 11:36:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:31.778 00:13:31.778 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.778 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.778 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.037 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.037 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.037 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.037 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.037 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.037 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.037 { 00:13:32.037 "cntlid": 23, 00:13:32.037 "qid": 0, 00:13:32.037 "state": "enabled", 00:13:32.037 "thread": "nvmf_tgt_poll_group_000", 00:13:32.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:32.037 "listen_address": { 00:13:32.037 "trtype": "RDMA", 00:13:32.037 "adrfam": "IPv4", 00:13:32.037 "traddr": "10.0.0.2", 00:13:32.037 "trsvcid": "4420" 00:13:32.037 }, 00:13:32.037 "peer_address": { 00:13:32.037 "trtype": "RDMA", 00:13:32.037 "adrfam": "IPv4", 00:13:32.037 "traddr": "10.0.0.2", 00:13:32.037 "trsvcid": "36632" 00:13:32.037 }, 00:13:32.037 "auth": { 00:13:32.037 "state": "completed", 00:13:32.037 "digest": "sha256", 
00:13:32.037 "dhgroup": "ffdhe3072" 00:13:32.037 } 00:13:32.037 } 00:13:32.037 ]' 00:13:32.037 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.037 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:32.037 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.037 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:32.037 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:32.295 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.295 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.295 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.295 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:13:32.295 11:36:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:13:33.230 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:13:33.230 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:33.230 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.230 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.230 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.230 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:33.230 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.230 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:33.230 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:33.488 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:13:33.488 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.488 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:33.488 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:33.488 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:33.488 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.488 
11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.488 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.488 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.488 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.488 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.488 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.488 11:36:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.746 00:13:33.746 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.746 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.746 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.746 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.746 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.746 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.746 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.004 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.004 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.004 { 00:13:34.004 "cntlid": 25, 00:13:34.004 "qid": 0, 00:13:34.004 "state": "enabled", 00:13:34.004 "thread": "nvmf_tgt_poll_group_000", 00:13:34.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:34.004 "listen_address": { 00:13:34.004 "trtype": "RDMA", 00:13:34.004 "adrfam": "IPv4", 00:13:34.004 "traddr": "10.0.0.2", 00:13:34.004 "trsvcid": "4420" 00:13:34.004 }, 00:13:34.004 "peer_address": { 00:13:34.004 "trtype": "RDMA", 00:13:34.004 "adrfam": "IPv4", 00:13:34.004 "traddr": "10.0.0.2", 00:13:34.004 "trsvcid": "57193" 00:13:34.004 }, 00:13:34.004 "auth": { 00:13:34.004 "state": "completed", 00:13:34.004 "digest": "sha256", 00:13:34.004 "dhgroup": "ffdhe4096" 00:13:34.004 } 00:13:34.004 } 00:13:34.004 ]' 00:13:34.004 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.004 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:34.004 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.004 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:34.004 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.004 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.004 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.005 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.262 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:13:34.262 11:36:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:13:34.829 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.829 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:34.829 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.829 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.829 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.829 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.829 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:34.829 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:35.089 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:13:35.089 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.089 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:35.089 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:35.089 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:35.089 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.089 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.089 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.089 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.089 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.089 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.089 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.089 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.347 00:13:35.347 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.347 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.347 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.607 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.607 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.607 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.607 11:36:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.607 11:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:35.607 11:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.607 { 00:13:35.607 "cntlid": 27, 00:13:35.607 "qid": 0, 00:13:35.607 "state": "enabled", 00:13:35.607 "thread": "nvmf_tgt_poll_group_000", 00:13:35.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:35.607 "listen_address": { 00:13:35.607 "trtype": "RDMA", 00:13:35.607 "adrfam": "IPv4", 00:13:35.607 "traddr": "10.0.0.2", 00:13:35.607 "trsvcid": "4420" 00:13:35.607 }, 00:13:35.607 "peer_address": { 00:13:35.607 "trtype": "RDMA", 00:13:35.607 "adrfam": "IPv4", 00:13:35.607 "traddr": "10.0.0.2", 00:13:35.607 "trsvcid": "37241" 00:13:35.607 }, 00:13:35.607 "auth": { 00:13:35.607 "state": "completed", 00:13:35.607 "digest": "sha256", 00:13:35.607 "dhgroup": "ffdhe4096" 00:13:35.607 } 00:13:35.607 } 00:13:35.607 ]' 00:13:35.607 11:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.607 11:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:35.607 11:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.866 11:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:35.866 11:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.866 11:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.866 11:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.866 11:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.125 11:36:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:13:36.125 11:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:13:36.692 11:36:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.692 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:36.692 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.692 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.692 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.692 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.692 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:36.692 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:36.951 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:13:36.951 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.951 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:36.951 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:36.951 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:36.951 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.951 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.951 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.951 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.951 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.951 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.951 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.951 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.210 00:13:37.210 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.210 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.210 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.469 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.469 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.469 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.469 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.469 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.469 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.469 { 00:13:37.469 "cntlid": 29, 00:13:37.469 "qid": 0, 00:13:37.469 "state": "enabled", 00:13:37.469 "thread": "nvmf_tgt_poll_group_000", 00:13:37.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:37.469 "listen_address": { 00:13:37.469 "trtype": "RDMA", 00:13:37.469 "adrfam": "IPv4", 00:13:37.469 "traddr": "10.0.0.2", 00:13:37.469 "trsvcid": "4420" 00:13:37.469 }, 00:13:37.469 "peer_address": { 00:13:37.469 "trtype": "RDMA", 00:13:37.469 "adrfam": "IPv4", 
00:13:37.469 "traddr": "10.0.0.2", 00:13:37.469 "trsvcid": "53618" 00:13:37.469 }, 00:13:37.469 "auth": { 00:13:37.469 "state": "completed", 00:13:37.469 "digest": "sha256", 00:13:37.469 "dhgroup": "ffdhe4096" 00:13:37.469 } 00:13:37.469 } 00:13:37.469 ]' 00:13:37.469 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.469 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:37.469 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.469 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:37.469 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.469 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.469 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.469 11:36:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.728 11:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:13:37.728 11:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:13:38.300 11:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.559 11:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:38.559 11:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.559 11:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.559 11:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.559 11:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.559 11:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:38.559 11:36:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:38.817 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:13:38.817 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.817 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:38.817 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:38.817 
11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:38.817 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.817 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:13:38.817 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.817 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.817 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.818 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:38.818 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:38.818 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:39.075 00:13:39.075 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.075 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.075 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.334 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.334 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.334 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.334 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.334 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.334 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.334 { 00:13:39.334 "cntlid": 31, 00:13:39.334 "qid": 0, 00:13:39.334 "state": "enabled", 00:13:39.334 "thread": "nvmf_tgt_poll_group_000", 00:13:39.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:39.334 "listen_address": { 00:13:39.334 "trtype": "RDMA", 00:13:39.334 "adrfam": "IPv4", 00:13:39.334 "traddr": "10.0.0.2", 00:13:39.334 "trsvcid": "4420" 00:13:39.334 }, 00:13:39.334 "peer_address": { 00:13:39.334 "trtype": "RDMA", 00:13:39.334 "adrfam": "IPv4", 00:13:39.334 "traddr": "10.0.0.2", 00:13:39.334 "trsvcid": "54708" 00:13:39.334 }, 00:13:39.334 "auth": { 00:13:39.334 "state": "completed", 00:13:39.334 "digest": "sha256", 00:13:39.334 "dhgroup": "ffdhe4096" 00:13:39.334 } 00:13:39.334 } 00:13:39.334 ]' 00:13:39.334 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.334 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:39.334 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.334 11:36:42 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:39.334 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.334 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.334 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.334 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.592 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:13:39.592 11:36:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:13:40.159 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
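The log above repeats one verification pattern per digest/dhgroup/key combination: `nvmf_subsystem_get_qpairs` returns the qpair list, and `target/auth.sh` lines 75-77 use `jq` to assert that `.[0].auth.digest`, `.[0].auth.dhgroup`, and `.[0].auth.state` match what was configured, with `state == "completed"` proving the DH-HMAC-CHAP exchange finished. A minimal standalone Python sketch of that same check (not part of the SPDK scripts; the sample record below just mirrors the JSON shape shown in this log):

```python
import json

def check_auth(qpairs_json: str, digest: str, dhgroup: str) -> bool:
    """Mirror the jq checks in target/auth.sh@75-77: the first qpair's
    negotiated auth parameters must match the configured digest and DH
    group, and the DH-HMAC-CHAP exchange must have completed."""
    auth = json.loads(qpairs_json)[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

# Sample record shaped like the nvmf_subsystem_get_qpairs output above
sample = json.dumps([{
    "cntlid": 25,
    "qid": 0,
    "state": "enabled",
    "auth": {"state": "completed", "digest": "sha256",
             "dhgroup": "ffdhe4096"},
}])

print(check_auth(sample, "sha256", "ffdhe4096"))  # True
```

The real test then detaches the controller (`bdev_nvme_detach_controller`) and repeats the cycle for the next key index, which is why the same `jq` assertions recur throughout this log.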
00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.418 11:36:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.985 00:13:40.985 11:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:40.985 11:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:40.986 11:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.986 11:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.986 11:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.986 11:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.986 11:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.244 11:36:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.244 11:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.244 { 00:13:41.244 "cntlid": 33, 00:13:41.244 "qid": 0, 00:13:41.244 "state": "enabled", 00:13:41.244 "thread": "nvmf_tgt_poll_group_000", 00:13:41.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:41.244 "listen_address": { 00:13:41.244 "trtype": "RDMA", 00:13:41.244 "adrfam": "IPv4", 00:13:41.244 "traddr": "10.0.0.2", 00:13:41.244 "trsvcid": "4420" 00:13:41.244 }, 00:13:41.244 "peer_address": { 00:13:41.244 "trtype": "RDMA", 00:13:41.244 "adrfam": "IPv4", 00:13:41.244 "traddr": "10.0.0.2", 00:13:41.244 "trsvcid": "51692" 00:13:41.244 }, 00:13:41.244 "auth": { 00:13:41.244 "state": "completed", 00:13:41.244 "digest": "sha256", 00:13:41.244 "dhgroup": "ffdhe6144" 00:13:41.244 } 00:13:41.244 } 00:13:41.244 ]' 00:13:41.244 11:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.244 11:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:41.244 11:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.244 11:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:41.244 11:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.244 11:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.244 11:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.244 11:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.503 11:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:13:41.503 11:36:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:13:42.071 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.071 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:42.071 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.071 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.330 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.330 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:42.330 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:42.330 11:36:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:42.330 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:13:42.330 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:42.330 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:42.330 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:42.330 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:42.330 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.330 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.330 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.330 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.330 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.330 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.330 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.330 11:36:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.897 00:13:42.897 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:42.897 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:42.897 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.897 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.897 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.897 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.897 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.897 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.897 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:42.897 { 00:13:42.897 "cntlid": 35, 00:13:42.897 "qid": 0, 00:13:42.897 "state": "enabled", 00:13:42.897 "thread": "nvmf_tgt_poll_group_000", 00:13:42.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:42.897 "listen_address": { 00:13:42.897 "trtype": "RDMA", 00:13:42.897 "adrfam": "IPv4", 00:13:42.897 "traddr": "10.0.0.2", 
00:13:42.897 "trsvcid": "4420" 00:13:42.897 }, 00:13:42.897 "peer_address": { 00:13:42.897 "trtype": "RDMA", 00:13:42.897 "adrfam": "IPv4", 00:13:42.897 "traddr": "10.0.0.2", 00:13:42.897 "trsvcid": "34308" 00:13:42.897 }, 00:13:42.897 "auth": { 00:13:42.897 "state": "completed", 00:13:42.897 "digest": "sha256", 00:13:42.897 "dhgroup": "ffdhe6144" 00:13:42.897 } 00:13:42.897 } 00:13:42.897 ]' 00:13:42.897 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:43.156 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:43.156 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:43.156 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:43.156 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:43.156 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.156 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.156 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.415 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:13:43.415 11:36:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 
--hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:13:43.982 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.982 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:43.982 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.982 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.982 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.982 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.982 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:43.982 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:44.241 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:13:44.241 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.241 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:44.241 11:36:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:44.241 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:44.241 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.241 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.241 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.241 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.241 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.241 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.241 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.241 11:36:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.809 00:13:44.809 11:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:44.809 11:36:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:44.809 11:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.809 11:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.809 11:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.809 11:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.809 11:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.809 11:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.809 11:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.809 { 00:13:44.809 "cntlid": 37, 00:13:44.809 "qid": 0, 00:13:44.809 "state": "enabled", 00:13:44.809 "thread": "nvmf_tgt_poll_group_000", 00:13:44.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:44.809 "listen_address": { 00:13:44.809 "trtype": "RDMA", 00:13:44.809 "adrfam": "IPv4", 00:13:44.809 "traddr": "10.0.0.2", 00:13:44.809 "trsvcid": "4420" 00:13:44.809 }, 00:13:44.809 "peer_address": { 00:13:44.809 "trtype": "RDMA", 00:13:44.809 "adrfam": "IPv4", 00:13:44.809 "traddr": "10.0.0.2", 00:13:44.809 "trsvcid": "45353" 00:13:44.809 }, 00:13:44.809 "auth": { 00:13:44.809 "state": "completed", 00:13:44.809 "digest": "sha256", 00:13:44.809 "dhgroup": "ffdhe6144" 00:13:44.809 } 00:13:44.809 } 00:13:44.809 ]' 00:13:44.809 11:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.809 11:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:13:44.809 11:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.068 11:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:45.068 11:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.068 11:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.068 11:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.068 11:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.327 11:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:13:45.327 11:36:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:13:45.894 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.894 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:45.894 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.894 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.894 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.894 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:45.894 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:45.894 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:46.152 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:13:46.152 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.152 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:46.152 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:46.152 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:46.152 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.152 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:13:46.152 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.152 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.152 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.152 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:46.152 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:46.152 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:46.408 00:13:46.408 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.408 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.408 11:36:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.665 11:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.665 11:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.665 11:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.665 11:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:46.665 11:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.665 11:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.665 { 00:13:46.665 "cntlid": 39, 00:13:46.665 "qid": 0, 00:13:46.665 "state": "enabled", 00:13:46.665 "thread": "nvmf_tgt_poll_group_000", 00:13:46.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:46.665 "listen_address": { 00:13:46.665 "trtype": "RDMA", 00:13:46.665 "adrfam": "IPv4", 00:13:46.666 "traddr": "10.0.0.2", 00:13:46.666 "trsvcid": "4420" 00:13:46.666 }, 00:13:46.666 "peer_address": { 00:13:46.666 "trtype": "RDMA", 00:13:46.666 "adrfam": "IPv4", 00:13:46.666 "traddr": "10.0.0.2", 00:13:46.666 "trsvcid": "36669" 00:13:46.666 }, 00:13:46.666 "auth": { 00:13:46.666 "state": "completed", 00:13:46.666 "digest": "sha256", 00:13:46.666 "dhgroup": "ffdhe6144" 00:13:46.666 } 00:13:46.666 } 00:13:46.666 ]' 00:13:46.666 11:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.666 11:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:46.666 11:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.924 11:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:46.924 11:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.924 11:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.924 11:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.924 11:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.183 11:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:13:47.183 11:36:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:13:47.751 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.751 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:47.751 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.751 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.751 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.751 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:47.751 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:47.751 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:47.751 11:36:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:48.010 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:13:48.010 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.010 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:48.010 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:48.010 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:48.010 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.010 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.010 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.010 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.010 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.010 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.010 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.010 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.576 00:13:48.576 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:48.576 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:48.576 11:36:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.834 11:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.834 11:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.834 11:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.834 11:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.834 11:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.834 11:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:48.834 { 00:13:48.834 "cntlid": 41, 00:13:48.834 "qid": 0, 00:13:48.834 "state": "enabled", 00:13:48.834 "thread": "nvmf_tgt_poll_group_000", 00:13:48.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:48.834 "listen_address": { 00:13:48.834 "trtype": "RDMA", 00:13:48.834 "adrfam": "IPv4", 00:13:48.834 "traddr": "10.0.0.2", 
00:13:48.834 "trsvcid": "4420" 00:13:48.834 }, 00:13:48.834 "peer_address": { 00:13:48.834 "trtype": "RDMA", 00:13:48.834 "adrfam": "IPv4", 00:13:48.834 "traddr": "10.0.0.2", 00:13:48.834 "trsvcid": "57869" 00:13:48.834 }, 00:13:48.834 "auth": { 00:13:48.834 "state": "completed", 00:13:48.834 "digest": "sha256", 00:13:48.834 "dhgroup": "ffdhe8192" 00:13:48.834 } 00:13:48.834 } 00:13:48.834 ]' 00:13:48.834 11:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:48.834 11:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:48.834 11:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:48.834 11:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:48.834 11:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:48.834 11:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.834 11:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.834 11:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.092 11:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:13:49.092 11:36:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:13:49.658 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.658 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:49.658 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.658 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.658 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.658 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:49.658 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:49.659 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:49.918 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:13:49.918 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:49.918 11:36:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:49.918 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:49.918 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:49.918 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.918 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.918 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.918 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.918 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.918 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.918 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.918 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.485 00:13:50.485 11:36:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:50.485 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:50.485 11:36:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.744 11:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.744 11:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.744 11:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.744 11:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.744 11:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.744 11:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:50.744 { 00:13:50.744 "cntlid": 43, 00:13:50.744 "qid": 0, 00:13:50.744 "state": "enabled", 00:13:50.744 "thread": "nvmf_tgt_poll_group_000", 00:13:50.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:50.744 "listen_address": { 00:13:50.744 "trtype": "RDMA", 00:13:50.744 "adrfam": "IPv4", 00:13:50.744 "traddr": "10.0.0.2", 00:13:50.744 "trsvcid": "4420" 00:13:50.744 }, 00:13:50.744 "peer_address": { 00:13:50.744 "trtype": "RDMA", 00:13:50.744 "adrfam": "IPv4", 00:13:50.744 "traddr": "10.0.0.2", 00:13:50.744 "trsvcid": "39101" 00:13:50.744 }, 00:13:50.744 "auth": { 00:13:50.744 "state": "completed", 00:13:50.744 "digest": "sha256", 00:13:50.744 "dhgroup": "ffdhe8192" 00:13:50.744 } 00:13:50.744 } 00:13:50.744 ]' 00:13:50.744 11:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 
-- # jq -r '.[0].auth.digest' 00:13:50.744 11:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:50.744 11:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:50.744 11:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:50.744 11:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:50.744 11:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.744 11:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.744 11:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.003 11:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:13:51.003 11:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:13:51.568 11:36:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.827 11:36:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:51.827 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.827 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.827 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.827 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:51.827 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:51.827 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:52.086 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:13:52.086 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.086 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:52.086 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:52.086 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:52.086 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.086 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.086 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.086 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.086 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.086 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.086 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.086 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.344 00:13:52.344 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:52.344 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:52.344 11:36:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.603 11:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.604 11:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.604 11:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.604 11:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.604 11:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.604 11:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:52.604 { 00:13:52.604 "cntlid": 45, 00:13:52.604 "qid": 0, 00:13:52.604 "state": "enabled", 00:13:52.604 "thread": "nvmf_tgt_poll_group_000", 00:13:52.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:52.604 "listen_address": { 00:13:52.604 "trtype": "RDMA", 00:13:52.604 "adrfam": "IPv4", 00:13:52.604 "traddr": "10.0.0.2", 00:13:52.604 "trsvcid": "4420" 00:13:52.604 }, 00:13:52.604 "peer_address": { 00:13:52.604 "trtype": "RDMA", 00:13:52.604 "adrfam": "IPv4", 00:13:52.604 "traddr": "10.0.0.2", 00:13:52.604 "trsvcid": "36420" 00:13:52.604 }, 00:13:52.604 "auth": { 00:13:52.604 "state": "completed", 00:13:52.604 "digest": "sha256", 00:13:52.604 "dhgroup": "ffdhe8192" 00:13:52.604 } 00:13:52.604 } 00:13:52.604 ]' 00:13:52.604 11:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:52.604 11:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:52.865 11:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:52.865 11:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:52.865 11:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:52.865 11:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:13:52.865 11:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.865 11:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.125 11:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:13:53.125 11:36:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:13:53.767 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.767 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:53.767 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.767 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.767 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.767 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # 
for keyid in "${!keys[@]}" 00:13:53.767 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:53.767 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:54.041 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:13:54.041 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.041 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:54.041 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:54.041 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:54.041 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.041 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:13:54.041 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.041 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.041 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.041 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:54.041 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # 
hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:54.041 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:54.606 00:13:54.606 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:54.606 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:54.606 11:36:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.866 11:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.866 11:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.866 11:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.866 11:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.866 11:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.866 11:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:54.866 { 00:13:54.866 "cntlid": 47, 00:13:54.866 "qid": 0, 00:13:54.866 "state": "enabled", 00:13:54.866 "thread": "nvmf_tgt_poll_group_000", 00:13:54.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 
00:13:54.866 "listen_address": { 00:13:54.866 "trtype": "RDMA", 00:13:54.866 "adrfam": "IPv4", 00:13:54.866 "traddr": "10.0.0.2", 00:13:54.866 "trsvcid": "4420" 00:13:54.866 }, 00:13:54.866 "peer_address": { 00:13:54.866 "trtype": "RDMA", 00:13:54.866 "adrfam": "IPv4", 00:13:54.866 "traddr": "10.0.0.2", 00:13:54.866 "trsvcid": "37671" 00:13:54.866 }, 00:13:54.866 "auth": { 00:13:54.866 "state": "completed", 00:13:54.866 "digest": "sha256", 00:13:54.866 "dhgroup": "ffdhe8192" 00:13:54.866 } 00:13:54.866 } 00:13:54.866 ]' 00:13:54.866 11:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:54.866 11:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:54.866 11:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:54.866 11:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:54.866 11:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:54.866 11:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.866 11:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.866 11:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.125 11:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:13:55.125 11:36:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:13:55.693 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.959 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:55.959 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.959 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.959 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.959 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:55.959 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:55.959 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:55.959 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:55.959 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:56.218 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:56.219 11:36:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.219 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:56.219 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:56.219 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:56.219 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.219 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.219 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.219 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.219 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.219 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.219 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.219 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.478 00:13:56.478 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.478 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.478 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.737 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.737 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.737 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.737 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.737 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.737 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:56.738 { 00:13:56.738 "cntlid": 49, 00:13:56.738 "qid": 0, 00:13:56.738 "state": "enabled", 00:13:56.738 "thread": "nvmf_tgt_poll_group_000", 00:13:56.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:56.738 "listen_address": { 00:13:56.738 "trtype": "RDMA", 00:13:56.738 "adrfam": "IPv4", 00:13:56.738 "traddr": "10.0.0.2", 00:13:56.738 "trsvcid": "4420" 00:13:56.738 }, 00:13:56.738 "peer_address": { 00:13:56.738 "trtype": "RDMA", 00:13:56.738 "adrfam": "IPv4", 00:13:56.738 "traddr": "10.0.0.2", 00:13:56.738 "trsvcid": "56692" 00:13:56.738 }, 00:13:56.738 "auth": { 00:13:56.738 "state": "completed", 00:13:56.738 "digest": "sha384", 00:13:56.738 "dhgroup": "null" 00:13:56.738 } 00:13:56.738 } 00:13:56.738 ]' 
00:13:56.738 11:36:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:56.738 11:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:56.738 11:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:56.738 11:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:56.738 11:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:56.738 11:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.738 11:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.738 11:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.997 11:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:13:56.997 11:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:13:57.565 11:37:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- 
# nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.824 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:57.824 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.824 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.824 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.824 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:57.824 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:57.824 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:57.824 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:57.824 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:57.824 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:57.824 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:57.824 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:57.824 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.824 11:37:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.824 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.825 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.825 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.825 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.825 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.825 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.084 00:13:58.084 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:58.084 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:58.084 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.343 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.343 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.343 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.343 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.343 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.343 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:58.343 { 00:13:58.343 "cntlid": 51, 00:13:58.343 "qid": 0, 00:13:58.343 "state": "enabled", 00:13:58.343 "thread": "nvmf_tgt_poll_group_000", 00:13:58.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:13:58.343 "listen_address": { 00:13:58.343 "trtype": "RDMA", 00:13:58.343 "adrfam": "IPv4", 00:13:58.343 "traddr": "10.0.0.2", 00:13:58.343 "trsvcid": "4420" 00:13:58.343 }, 00:13:58.343 "peer_address": { 00:13:58.343 "trtype": "RDMA", 00:13:58.343 "adrfam": "IPv4", 00:13:58.343 "traddr": "10.0.0.2", 00:13:58.343 "trsvcid": "41547" 00:13:58.343 }, 00:13:58.343 "auth": { 00:13:58.343 "state": "completed", 00:13:58.343 "digest": "sha384", 00:13:58.343 "dhgroup": "null" 00:13:58.343 } 00:13:58.343 } 00:13:58.343 ]' 00:13:58.343 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:58.343 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:58.343 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:58.602 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:58.602 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:13:58.602 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.602 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.602 11:37:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.860 11:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:13:58.860 11:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:13:59.427 11:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.427 11:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:13:59.427 11:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.427 11:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.427 11:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.427 11:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:59.427 11:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:59.427 11:37:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:59.686 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:59.686 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.686 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:59.686 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:59.686 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:59.686 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.687 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.687 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.687 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.687 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.687 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.687 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.687 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.946 00:13:59.946 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.946 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.946 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.204 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.204 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.204 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.204 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.204 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.204 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:00.204 { 
00:14:00.204 "cntlid": 53, 00:14:00.204 "qid": 0, 00:14:00.204 "state": "enabled", 00:14:00.204 "thread": "nvmf_tgt_poll_group_000", 00:14:00.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:00.204 "listen_address": { 00:14:00.204 "trtype": "RDMA", 00:14:00.204 "adrfam": "IPv4", 00:14:00.204 "traddr": "10.0.0.2", 00:14:00.204 "trsvcid": "4420" 00:14:00.204 }, 00:14:00.204 "peer_address": { 00:14:00.204 "trtype": "RDMA", 00:14:00.204 "adrfam": "IPv4", 00:14:00.204 "traddr": "10.0.0.2", 00:14:00.204 "trsvcid": "34193" 00:14:00.204 }, 00:14:00.204 "auth": { 00:14:00.204 "state": "completed", 00:14:00.204 "digest": "sha384", 00:14:00.205 "dhgroup": "null" 00:14:00.205 } 00:14:00.205 } 00:14:00.205 ]' 00:14:00.205 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:00.205 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:00.205 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:00.205 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:00.205 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:00.205 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.205 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.205 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.462 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:14:00.462 11:37:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:14:01.029 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.288 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:01.288 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.288 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.288 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.288 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:01.288 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:01.288 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:01.547 11:37:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:14:01.547 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.547 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:01.547 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:01.547 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:01.547 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.547 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:14:01.547 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.547 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.547 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.547 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:01.547 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.547 11:37:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.806 00:14:01.806 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.806 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.806 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.065 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.065 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.065 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.065 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.065 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.065 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:02.065 { 00:14:02.065 "cntlid": 55, 00:14:02.065 "qid": 0, 00:14:02.065 "state": "enabled", 00:14:02.065 "thread": "nvmf_tgt_poll_group_000", 00:14:02.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:02.065 "listen_address": { 00:14:02.065 "trtype": "RDMA", 00:14:02.065 "adrfam": "IPv4", 00:14:02.065 "traddr": "10.0.0.2", 00:14:02.065 "trsvcid": "4420" 00:14:02.065 }, 00:14:02.065 "peer_address": { 00:14:02.065 "trtype": "RDMA", 00:14:02.065 "adrfam": "IPv4", 00:14:02.065 "traddr": "10.0.0.2", 00:14:02.065 "trsvcid": "51478" 00:14:02.065 }, 00:14:02.065 "auth": { 00:14:02.065 "state": "completed", 00:14:02.065 "digest": "sha384", 
00:14:02.065 "dhgroup": "null" 00:14:02.065 } 00:14:02.065 } 00:14:02.065 ]' 00:14:02.065 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:02.065 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:02.066 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:02.066 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:02.066 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:02.066 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.066 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.066 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.324 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:14:02.325 11:37:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:14:02.892 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.892 
11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:02.892 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.892 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.151 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.151 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:03.151 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:03.151 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:03.151 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:03.151 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:14:03.151 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:03.151 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:03.151 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:03.151 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:03.151 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.151 11:37:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.151 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.151 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.151 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.151 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.151 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.151 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.412 00:14:03.412 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.412 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.412 11:37:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.671 11:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.671 11:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.671 11:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.671 11:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.671 11:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.671 11:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.671 { 00:14:03.671 "cntlid": 57, 00:14:03.671 "qid": 0, 00:14:03.671 "state": "enabled", 00:14:03.671 "thread": "nvmf_tgt_poll_group_000", 00:14:03.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:03.671 "listen_address": { 00:14:03.671 "trtype": "RDMA", 00:14:03.671 "adrfam": "IPv4", 00:14:03.671 "traddr": "10.0.0.2", 00:14:03.671 "trsvcid": "4420" 00:14:03.671 }, 00:14:03.671 "peer_address": { 00:14:03.671 "trtype": "RDMA", 00:14:03.671 "adrfam": "IPv4", 00:14:03.671 "traddr": "10.0.0.2", 00:14:03.671 "trsvcid": "35495" 00:14:03.671 }, 00:14:03.671 "auth": { 00:14:03.671 "state": "completed", 00:14:03.671 "digest": "sha384", 00:14:03.671 "dhgroup": "ffdhe2048" 00:14:03.671 } 00:14:03.671 } 00:14:03.671 ]' 00:14:03.671 11:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.671 11:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:03.671 11:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.929 11:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:03.929 11:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.929 11:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.929 11:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.929 11:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.187 11:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:14:04.187 11:37:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:14:04.754 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.754 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:04.754 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.754 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:04.754 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.754 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.754 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:04.754 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:05.012 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:14:05.012 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.012 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:05.012 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:05.012 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:05.012 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.012 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.012 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.012 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.012 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.012 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.012 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.013 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.271 00:14:05.271 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.271 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.271 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.530 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.530 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.530 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.530 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.530 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:05.530 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.530 { 00:14:05.530 "cntlid": 59, 00:14:05.530 "qid": 0, 00:14:05.530 "state": "enabled", 00:14:05.530 "thread": "nvmf_tgt_poll_group_000", 00:14:05.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:05.530 "listen_address": { 00:14:05.530 "trtype": "RDMA", 00:14:05.530 "adrfam": "IPv4", 00:14:05.530 "traddr": "10.0.0.2", 00:14:05.530 "trsvcid": "4420" 00:14:05.530 }, 00:14:05.530 "peer_address": { 00:14:05.530 "trtype": "RDMA", 00:14:05.530 "adrfam": "IPv4", 00:14:05.530 "traddr": "10.0.0.2", 00:14:05.530 "trsvcid": "42352" 00:14:05.530 }, 00:14:05.530 "auth": { 00:14:05.530 "state": "completed", 00:14:05.530 "digest": "sha384", 00:14:05.530 "dhgroup": "ffdhe2048" 00:14:05.530 } 00:14:05.530 } 00:14:05.530 ]' 00:14:05.530 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.530 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:05.530 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.530 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:05.530 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.530 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.530 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.530 11:37:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.799 11:37:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:14:05.799 11:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:14:06.370 11:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.629 11:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:06.629 11:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.629 11:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.629 11:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.629 11:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.629 11:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:06.629 11:37:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:06.888 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:14:06.888 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:06.888 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:06.888 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:06.888 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:06.888 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.888 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.888 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.888 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.888 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.888 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.888 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.888 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.147 00:14:07.147 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:07.147 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:07.147 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.147 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.147 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.147 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.147 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.406 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.406 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:07.406 { 00:14:07.406 "cntlid": 61, 00:14:07.406 "qid": 0, 00:14:07.406 "state": "enabled", 00:14:07.406 "thread": "nvmf_tgt_poll_group_000", 00:14:07.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:07.406 "listen_address": { 00:14:07.406 "trtype": "RDMA", 00:14:07.406 "adrfam": "IPv4", 00:14:07.406 "traddr": "10.0.0.2", 00:14:07.406 "trsvcid": "4420" 00:14:07.406 }, 00:14:07.406 "peer_address": { 00:14:07.406 "trtype": "RDMA", 00:14:07.406 "adrfam": "IPv4", 
00:14:07.406 "traddr": "10.0.0.2", 00:14:07.406 "trsvcid": "58138" 00:14:07.406 }, 00:14:07.406 "auth": { 00:14:07.406 "state": "completed", 00:14:07.406 "digest": "sha384", 00:14:07.406 "dhgroup": "ffdhe2048" 00:14:07.406 } 00:14:07.406 } 00:14:07.406 ]' 00:14:07.406 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:07.406 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:07.406 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:07.406 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:07.406 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:07.406 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.406 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.406 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.665 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:14:07.666 11:37:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:14:08.234 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.234 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:08.234 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.234 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.234 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.234 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:08.234 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:08.234 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:08.494 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:14:08.494 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:08.494 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:08.494 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:08.494 
11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:08.494 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.494 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:14:08.494 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.494 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.494 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.494 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:08.494 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:08.494 11:37:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:08.754 00:14:08.754 11:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.754 11:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.754 11:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.013 11:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.013 11:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.013 11:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.013 11:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.013 11:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.013 11:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:09.013 { 00:14:09.013 "cntlid": 63, 00:14:09.013 "qid": 0, 00:14:09.013 "state": "enabled", 00:14:09.013 "thread": "nvmf_tgt_poll_group_000", 00:14:09.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:09.013 "listen_address": { 00:14:09.013 "trtype": "RDMA", 00:14:09.013 "adrfam": "IPv4", 00:14:09.013 "traddr": "10.0.0.2", 00:14:09.013 "trsvcid": "4420" 00:14:09.013 }, 00:14:09.013 "peer_address": { 00:14:09.013 "trtype": "RDMA", 00:14:09.013 "adrfam": "IPv4", 00:14:09.013 "traddr": "10.0.0.2", 00:14:09.013 "trsvcid": "33278" 00:14:09.013 }, 00:14:09.013 "auth": { 00:14:09.013 "state": "completed", 00:14:09.013 "digest": "sha384", 00:14:09.013 "dhgroup": "ffdhe2048" 00:14:09.013 } 00:14:09.013 } 00:14:09.013 ]' 00:14:09.013 11:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:09.013 11:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:09.013 11:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:09.273 11:37:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:09.273 11:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:09.273 11:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.273 11:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.273 11:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.273 11:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:14:09.273 11:37:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:14:10.210 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.210 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:10.210 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.210 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:10.210 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.210 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:10.210 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.210 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:10.210 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:10.469 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:14:10.469 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.469 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:10.469 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:10.469 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:10.469 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.469 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.469 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.469 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:10.469 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.469 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.469 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.470 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.729 00:14:10.729 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.729 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.729 11:37:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.729 11:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.729 11:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.729 11:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.729 11:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.729 11:37:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.729 11:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.729 { 00:14:10.729 "cntlid": 65, 00:14:10.729 "qid": 0, 00:14:10.729 "state": "enabled", 00:14:10.729 "thread": "nvmf_tgt_poll_group_000", 00:14:10.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:10.729 "listen_address": { 00:14:10.729 "trtype": "RDMA", 00:14:10.729 "adrfam": "IPv4", 00:14:10.729 "traddr": "10.0.0.2", 00:14:10.729 "trsvcid": "4420" 00:14:10.729 }, 00:14:10.729 "peer_address": { 00:14:10.729 "trtype": "RDMA", 00:14:10.729 "adrfam": "IPv4", 00:14:10.729 "traddr": "10.0.0.2", 00:14:10.729 "trsvcid": "56676" 00:14:10.729 }, 00:14:10.729 "auth": { 00:14:10.729 "state": "completed", 00:14:10.729 "digest": "sha384", 00:14:10.729 "dhgroup": "ffdhe3072" 00:14:10.729 } 00:14:10.729 } 00:14:10.729 ]' 00:14:10.729 11:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:10.989 11:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:10.989 11:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.989 11:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:10.989 11:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.989 11:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.989 11:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.989 11:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.249 11:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:14:11.249 11:37:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:14:11.818 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.818 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:11.818 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.818 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.818 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.818 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:11.818 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:11.818 11:37:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:12.077 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:14:12.077 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.077 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:12.077 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:12.077 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:12.077 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.077 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.077 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.077 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.077 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.077 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.077 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.077 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.336 00:14:12.336 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.336 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.336 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.596 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.596 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.596 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.596 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.596 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.596 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:12.596 { 00:14:12.596 "cntlid": 67, 00:14:12.596 "qid": 0, 00:14:12.596 "state": "enabled", 00:14:12.596 "thread": "nvmf_tgt_poll_group_000", 00:14:12.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:12.596 "listen_address": { 00:14:12.596 "trtype": "RDMA", 00:14:12.596 "adrfam": "IPv4", 00:14:12.596 "traddr": "10.0.0.2", 
00:14:12.596 "trsvcid": "4420" 00:14:12.596 }, 00:14:12.596 "peer_address": { 00:14:12.596 "trtype": "RDMA", 00:14:12.596 "adrfam": "IPv4", 00:14:12.596 "traddr": "10.0.0.2", 00:14:12.596 "trsvcid": "55849" 00:14:12.596 }, 00:14:12.596 "auth": { 00:14:12.596 "state": "completed", 00:14:12.596 "digest": "sha384", 00:14:12.596 "dhgroup": "ffdhe3072" 00:14:12.596 } 00:14:12.596 } 00:14:12.596 ]' 00:14:12.596 11:37:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:12.596 11:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:12.596 11:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:12.596 11:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:12.596 11:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:12.856 11:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.856 11:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.856 11:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.856 11:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:14:12.856 11:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 
--hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:14:13.792 11:37:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:13.792 11:37:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:13.792 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.049 00:14:14.050 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.050 11:37:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:14.050 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.309 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.309 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.309 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.309 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.309 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.309 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:14.309 { 00:14:14.309 "cntlid": 69, 00:14:14.309 "qid": 0, 00:14:14.309 "state": "enabled", 00:14:14.309 "thread": "nvmf_tgt_poll_group_000", 00:14:14.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:14.309 "listen_address": { 00:14:14.309 "trtype": "RDMA", 00:14:14.309 "adrfam": "IPv4", 00:14:14.309 "traddr": "10.0.0.2", 00:14:14.309 "trsvcid": "4420" 00:14:14.309 }, 00:14:14.309 "peer_address": { 00:14:14.309 "trtype": "RDMA", 00:14:14.309 "adrfam": "IPv4", 00:14:14.309 "traddr": "10.0.0.2", 00:14:14.309 "trsvcid": "57960" 00:14:14.309 }, 00:14:14.309 "auth": { 00:14:14.309 "state": "completed", 00:14:14.309 "digest": "sha384", 00:14:14.309 "dhgroup": "ffdhe3072" 00:14:14.309 } 00:14:14.309 } 00:14:14.309 ]' 00:14:14.309 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:14.309 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:14:14.309 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:14.653 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:14.653 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.653 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.653 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.653 11:37:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.653 11:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:14:14.653 11:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:14:15.220 11:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.480 11:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:15.480 11:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.480 11:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.480 11:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.480 11:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:15.480 11:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:15.480 11:37:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:15.739 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:14:15.739 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:15.739 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:15.739 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:15.739 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:15.739 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.739 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:14:15.739 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.739 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.739 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.739 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:15.739 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:15.739 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:15.997 00:14:15.997 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:15.997 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.997 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.255 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.255 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.255 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.255 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:16.255 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.255 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.255 { 00:14:16.255 "cntlid": 71, 00:14:16.255 "qid": 0, 00:14:16.255 "state": "enabled", 00:14:16.255 "thread": "nvmf_tgt_poll_group_000", 00:14:16.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:16.255 "listen_address": { 00:14:16.255 "trtype": "RDMA", 00:14:16.255 "adrfam": "IPv4", 00:14:16.255 "traddr": "10.0.0.2", 00:14:16.255 "trsvcid": "4420" 00:14:16.255 }, 00:14:16.255 "peer_address": { 00:14:16.255 "trtype": "RDMA", 00:14:16.255 "adrfam": "IPv4", 00:14:16.255 "traddr": "10.0.0.2", 00:14:16.255 "trsvcid": "37119" 00:14:16.255 }, 00:14:16.255 "auth": { 00:14:16.255 "state": "completed", 00:14:16.255 "digest": "sha384", 00:14:16.255 "dhgroup": "ffdhe3072" 00:14:16.255 } 00:14:16.255 } 00:14:16.255 ]' 00:14:16.255 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.255 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:16.256 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.256 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:16.256 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.256 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.256 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.256 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.515 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:14:16.515 11:37:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:14:17.083 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.342 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:17.342 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.342 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.342 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.342 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:17.342 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.342 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:17.342 11:37:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:17.342 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:14:17.342 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:17.342 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:17.342 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:17.342 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:17.342 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.343 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.343 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.343 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.343 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.343 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.343 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.343 11:37:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.911 00:14:17.911 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:17.911 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.911 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:17.911 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.911 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.911 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.911 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.911 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.911 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.911 { 00:14:17.911 "cntlid": 73, 00:14:17.911 "qid": 0, 00:14:17.911 "state": "enabled", 00:14:17.911 "thread": "nvmf_tgt_poll_group_000", 00:14:17.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:17.911 "listen_address": { 00:14:17.911 "trtype": "RDMA", 00:14:17.911 "adrfam": "IPv4", 00:14:17.911 "traddr": "10.0.0.2", 
00:14:17.911 "trsvcid": "4420" 00:14:17.911 }, 00:14:17.911 "peer_address": { 00:14:17.911 "trtype": "RDMA", 00:14:17.911 "adrfam": "IPv4", 00:14:17.911 "traddr": "10.0.0.2", 00:14:17.911 "trsvcid": "56862" 00:14:17.911 }, 00:14:17.911 "auth": { 00:14:17.911 "state": "completed", 00:14:17.911 "digest": "sha384", 00:14:17.911 "dhgroup": "ffdhe4096" 00:14:17.911 } 00:14:17.911 } 00:14:17.911 ]' 00:14:17.911 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.911 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:17.911 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.171 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:18.171 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.171 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.171 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.171 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.430 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:14:18.430 11:37:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:14:18.998 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.998 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:18.998 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.998 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.998 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.998 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.998 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:18.998 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:19.257 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:14:19.257 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:19.257 11:37:22 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:19.257 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:19.257 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:19.257 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.257 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.257 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.257 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.257 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.257 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.257 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.257 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.517 00:14:19.517 11:37:22 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.517 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.517 11:37:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.776 11:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.776 11:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.776 11:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.776 11:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.776 11:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.776 11:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.776 { 00:14:19.776 "cntlid": 75, 00:14:19.776 "qid": 0, 00:14:19.776 "state": "enabled", 00:14:19.776 "thread": "nvmf_tgt_poll_group_000", 00:14:19.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:19.776 "listen_address": { 00:14:19.776 "trtype": "RDMA", 00:14:19.776 "adrfam": "IPv4", 00:14:19.776 "traddr": "10.0.0.2", 00:14:19.776 "trsvcid": "4420" 00:14:19.776 }, 00:14:19.776 "peer_address": { 00:14:19.776 "trtype": "RDMA", 00:14:19.776 "adrfam": "IPv4", 00:14:19.776 "traddr": "10.0.0.2", 00:14:19.776 "trsvcid": "57302" 00:14:19.776 }, 00:14:19.776 "auth": { 00:14:19.776 "state": "completed", 00:14:19.776 "digest": "sha384", 00:14:19.776 "dhgroup": "ffdhe4096" 00:14:19.776 } 00:14:19.776 } 00:14:19.776 ]' 00:14:19.776 11:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 
-- # jq -r '.[0].auth.digest' 00:14:19.776 11:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:19.776 11:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.776 11:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:19.776 11:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.776 11:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.776 11:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.776 11:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.036 11:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:14:20.036 11:37:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:14:20.974 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.974 11:37:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:20.974 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.974 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.974 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.974 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.974 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:20.974 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:20.974 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:14:20.974 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.974 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:20.974 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:20.974 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:20.974 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.974 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.974 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.974 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.974 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.974 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.975 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.975 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.234 00:14:21.234 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.234 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.234 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.493 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.493 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.493 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.493 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.493 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.493 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.493 { 00:14:21.493 "cntlid": 77, 00:14:21.493 "qid": 0, 00:14:21.493 "state": "enabled", 00:14:21.493 "thread": "nvmf_tgt_poll_group_000", 00:14:21.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:21.493 "listen_address": { 00:14:21.493 "trtype": "RDMA", 00:14:21.493 "adrfam": "IPv4", 00:14:21.493 "traddr": "10.0.0.2", 00:14:21.493 "trsvcid": "4420" 00:14:21.493 }, 00:14:21.493 "peer_address": { 00:14:21.493 "trtype": "RDMA", 00:14:21.493 "adrfam": "IPv4", 00:14:21.493 "traddr": "10.0.0.2", 00:14:21.493 "trsvcid": "50813" 00:14:21.493 }, 00:14:21.493 "auth": { 00:14:21.493 "state": "completed", 00:14:21.493 "digest": "sha384", 00:14:21.493 "dhgroup": "ffdhe4096" 00:14:21.493 } 00:14:21.493 } 00:14:21.493 ]' 00:14:21.493 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.493 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:21.493 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.753 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:21.753 11:37:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.753 11:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:14:21.753 11:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.753 11:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.013 11:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:14:22.013 11:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:14:22.581 11:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.581 11:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:22.581 11:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.581 11:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.581 11:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.581 11:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # 
for keyid in "${!keys[@]}" 00:14:22.581 11:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:22.581 11:37:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:22.842 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:14:22.842 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.842 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:22.842 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:22.842 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:22.842 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.842 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:14:22.842 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.842 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.842 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.842 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:22.842 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # 
hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:22.842 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:23.101 00:14:23.101 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.101 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.101 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.361 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.361 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.361 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.361 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.361 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.361 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.361 { 00:14:23.361 "cntlid": 79, 00:14:23.361 "qid": 0, 00:14:23.361 "state": "enabled", 00:14:23.361 "thread": "nvmf_tgt_poll_group_000", 00:14:23.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 
00:14:23.361 "listen_address": { 00:14:23.361 "trtype": "RDMA", 00:14:23.361 "adrfam": "IPv4", 00:14:23.361 "traddr": "10.0.0.2", 00:14:23.361 "trsvcid": "4420" 00:14:23.361 }, 00:14:23.361 "peer_address": { 00:14:23.361 "trtype": "RDMA", 00:14:23.361 "adrfam": "IPv4", 00:14:23.361 "traddr": "10.0.0.2", 00:14:23.361 "trsvcid": "41545" 00:14:23.361 }, 00:14:23.361 "auth": { 00:14:23.361 "state": "completed", 00:14:23.361 "digest": "sha384", 00:14:23.361 "dhgroup": "ffdhe4096" 00:14:23.361 } 00:14:23.361 } 00:14:23.361 ]' 00:14:23.361 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.361 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:23.361 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.361 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:23.361 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.361 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.361 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.361 11:37:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.620 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:14:23.620 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:14:24.187 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.445 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:24.445 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.445 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.445 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.445 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:24.445 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.445 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:24.445 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:24.704 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:14:24.704 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:24.704 11:37:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:24.704 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:24.704 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:24.704 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.704 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.704 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.704 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.704 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.704 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.704 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.704 11:37:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.963 00:14:24.963 11:37:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.963 11:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.963 11:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.221 11:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.221 11:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.221 11:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.221 11:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.221 11:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.221 11:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.221 { 00:14:25.221 "cntlid": 81, 00:14:25.221 "qid": 0, 00:14:25.221 "state": "enabled", 00:14:25.221 "thread": "nvmf_tgt_poll_group_000", 00:14:25.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:25.221 "listen_address": { 00:14:25.221 "trtype": "RDMA", 00:14:25.221 "adrfam": "IPv4", 00:14:25.221 "traddr": "10.0.0.2", 00:14:25.221 "trsvcid": "4420" 00:14:25.221 }, 00:14:25.221 "peer_address": { 00:14:25.221 "trtype": "RDMA", 00:14:25.222 "adrfam": "IPv4", 00:14:25.222 "traddr": "10.0.0.2", 00:14:25.222 "trsvcid": "48557" 00:14:25.222 }, 00:14:25.222 "auth": { 00:14:25.222 "state": "completed", 00:14:25.222 "digest": "sha384", 00:14:25.222 "dhgroup": "ffdhe6144" 00:14:25.222 } 00:14:25.222 } 00:14:25.222 ]' 00:14:25.222 11:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 
-- # jq -r '.[0].auth.digest' 00:14:25.222 11:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:25.222 11:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:25.222 11:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:25.222 11:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:25.222 11:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.222 11:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.222 11:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.480 11:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:14:25.480 11:37:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:14:26.046 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.305 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.305 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:26.305 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.305 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.305 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.305 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.305 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:26.305 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:26.564 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:14:26.564 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.564 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:26.564 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:26.564 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:26.564 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.564 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.564 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.564 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.564 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.564 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.564 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.564 11:37:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.823 00:14:26.823 11:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.823 11:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.823 11:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.082 11:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.082 11:37:30 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.083 11:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.083 11:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.083 11:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.083 11:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.083 { 00:14:27.083 "cntlid": 83, 00:14:27.083 "qid": 0, 00:14:27.083 "state": "enabled", 00:14:27.083 "thread": "nvmf_tgt_poll_group_000", 00:14:27.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:27.083 "listen_address": { 00:14:27.083 "trtype": "RDMA", 00:14:27.083 "adrfam": "IPv4", 00:14:27.083 "traddr": "10.0.0.2", 00:14:27.083 "trsvcid": "4420" 00:14:27.083 }, 00:14:27.083 "peer_address": { 00:14:27.083 "trtype": "RDMA", 00:14:27.083 "adrfam": "IPv4", 00:14:27.083 "traddr": "10.0.0.2", 00:14:27.083 "trsvcid": "58035" 00:14:27.083 }, 00:14:27.083 "auth": { 00:14:27.083 "state": "completed", 00:14:27.083 "digest": "sha384", 00:14:27.083 "dhgroup": "ffdhe6144" 00:14:27.083 } 00:14:27.083 } 00:14:27.083 ]' 00:14:27.083 11:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.083 11:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:27.083 11:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.083 11:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:27.083 11:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.083 11:37:30 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.083 11:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.083 11:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.342 11:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:14:27.342 11:37:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:14:27.910 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.170 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:28.170 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.170 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.170 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:28.170 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.170 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:28.170 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:28.430 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:14:28.430 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.430 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:28.430 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:28.430 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:28.430 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.430 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.430 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.430 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.430 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.430 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.430 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.430 11:37:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.690 00:14:28.690 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.690 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.690 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.950 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.950 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.950 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.950 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.950 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.950 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.950 { 00:14:28.950 "cntlid": 85, 00:14:28.950 
"qid": 0, 00:14:28.950 "state": "enabled", 00:14:28.950 "thread": "nvmf_tgt_poll_group_000", 00:14:28.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:28.950 "listen_address": { 00:14:28.950 "trtype": "RDMA", 00:14:28.950 "adrfam": "IPv4", 00:14:28.950 "traddr": "10.0.0.2", 00:14:28.950 "trsvcid": "4420" 00:14:28.950 }, 00:14:28.950 "peer_address": { 00:14:28.950 "trtype": "RDMA", 00:14:28.950 "adrfam": "IPv4", 00:14:28.950 "traddr": "10.0.0.2", 00:14:28.950 "trsvcid": "45029" 00:14:28.950 }, 00:14:28.950 "auth": { 00:14:28.950 "state": "completed", 00:14:28.950 "digest": "sha384", 00:14:28.950 "dhgroup": "ffdhe6144" 00:14:28.950 } 00:14:28.950 } 00:14:28.950 ]' 00:14:28.950 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.950 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:28.950 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.950 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:28.950 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.950 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.950 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.950 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.209 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:14:29.209 11:37:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:14:29.777 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.035 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:30.035 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.035 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.035 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.035 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.035 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:30.035 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:30.293 11:37:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:14:30.293 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.293 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:30.293 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:30.293 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:30.293 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.293 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:14:30.293 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.293 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.293 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.293 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:30.293 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:30.293 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:30.552 00:14:30.552 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.552 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.552 11:37:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.811 11:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.811 11:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.811 11:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.811 11:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.811 11:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.811 11:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.811 { 00:14:30.811 "cntlid": 87, 00:14:30.811 "qid": 0, 00:14:30.811 "state": "enabled", 00:14:30.811 "thread": "nvmf_tgt_poll_group_000", 00:14:30.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:30.811 "listen_address": { 00:14:30.811 "trtype": "RDMA", 00:14:30.811 "adrfam": "IPv4", 00:14:30.811 "traddr": "10.0.0.2", 00:14:30.811 "trsvcid": "4420" 00:14:30.811 }, 00:14:30.811 "peer_address": { 00:14:30.811 "trtype": "RDMA", 00:14:30.811 "adrfam": "IPv4", 00:14:30.811 "traddr": "10.0.0.2", 00:14:30.811 "trsvcid": "47830" 00:14:30.811 }, 00:14:30.811 "auth": { 00:14:30.811 "state": "completed", 00:14:30.811 "digest": "sha384", 
00:14:30.811 "dhgroup": "ffdhe6144" 00:14:30.811 } 00:14:30.811 } 00:14:30.811 ]' 00:14:30.811 11:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.811 11:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:30.811 11:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.811 11:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:30.811 11:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.811 11:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.811 11:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.811 11:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.069 11:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:14:31.069 11:37:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:14:31.636 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:14:31.894 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:31.894 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.894 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.894 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.894 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:31.894 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.894 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:31.894 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:32.153 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:14:32.153 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.153 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:32.153 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:32.153 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:32.153 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.153 
11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.153 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.153 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.153 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.153 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.153 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.153 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.412 00:14:32.670 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.670 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.670 11:37:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.670 11:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.670 11:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.670 11:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.670 11:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.670 11:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.670 11:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.670 { 00:14:32.670 "cntlid": 89, 00:14:32.670 "qid": 0, 00:14:32.670 "state": "enabled", 00:14:32.670 "thread": "nvmf_tgt_poll_group_000", 00:14:32.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:32.670 "listen_address": { 00:14:32.670 "trtype": "RDMA", 00:14:32.670 "adrfam": "IPv4", 00:14:32.670 "traddr": "10.0.0.2", 00:14:32.670 "trsvcid": "4420" 00:14:32.670 }, 00:14:32.670 "peer_address": { 00:14:32.670 "trtype": "RDMA", 00:14:32.670 "adrfam": "IPv4", 00:14:32.670 "traddr": "10.0.0.2", 00:14:32.670 "trsvcid": "41026" 00:14:32.670 }, 00:14:32.670 "auth": { 00:14:32.670 "state": "completed", 00:14:32.670 "digest": "sha384", 00:14:32.670 "dhgroup": "ffdhe8192" 00:14:32.670 } 00:14:32.670 } 00:14:32.670 ]' 00:14:32.670 11:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.928 11:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:32.928 11:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.928 11:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:32.928 11:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.928 11:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.928 11:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.928 11:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.186 11:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:14:33.186 11:37:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:14:33.752 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.752 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:33.752 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.752 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:33.752 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.752 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.752 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:33.752 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:34.010 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:14:34.010 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.010 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:34.010 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:34.010 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:34.010 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.010 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.010 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.010 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.010 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.010 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.010 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.010 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.577 00:14:34.577 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.577 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.577 11:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.836 11:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.836 11:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.836 11:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.836 11:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.836 11:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:34.836 11:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.836 { 00:14:34.836 "cntlid": 91, 00:14:34.836 "qid": 0, 00:14:34.836 "state": "enabled", 00:14:34.836 "thread": "nvmf_tgt_poll_group_000", 00:14:34.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:34.836 "listen_address": { 00:14:34.836 "trtype": "RDMA", 00:14:34.836 "adrfam": "IPv4", 00:14:34.836 "traddr": "10.0.0.2", 00:14:34.836 "trsvcid": "4420" 00:14:34.836 }, 00:14:34.836 "peer_address": { 00:14:34.836 "trtype": "RDMA", 00:14:34.836 "adrfam": "IPv4", 00:14:34.836 "traddr": "10.0.0.2", 00:14:34.836 "trsvcid": "35151" 00:14:34.836 }, 00:14:34.836 "auth": { 00:14:34.836 "state": "completed", 00:14:34.836 "digest": "sha384", 00:14:34.836 "dhgroup": "ffdhe8192" 00:14:34.836 } 00:14:34.836 } 00:14:34.836 ]' 00:14:34.836 11:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.836 11:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:34.836 11:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.836 11:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:34.836 11:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.836 11:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.836 11:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.836 11:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.094 11:37:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:14:35.094 11:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:14:35.662 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.926 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.499 00:14:36.499 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.499 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.499 11:37:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.758 11:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.758 11:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.758 11:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.758 11:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.758 11:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.758 11:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.758 { 00:14:36.758 "cntlid": 93, 00:14:36.758 "qid": 0, 00:14:36.758 "state": "enabled", 00:14:36.758 "thread": "nvmf_tgt_poll_group_000", 00:14:36.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:36.758 "listen_address": { 00:14:36.758 "trtype": "RDMA", 00:14:36.758 "adrfam": "IPv4", 00:14:36.758 "traddr": "10.0.0.2", 00:14:36.758 "trsvcid": "4420" 00:14:36.758 }, 00:14:36.758 "peer_address": { 00:14:36.758 "trtype": "RDMA", 00:14:36.758 "adrfam": "IPv4", 
00:14:36.758 "traddr": "10.0.0.2", 00:14:36.758 "trsvcid": "55688" 00:14:36.758 }, 00:14:36.758 "auth": { 00:14:36.758 "state": "completed", 00:14:36.758 "digest": "sha384", 00:14:36.758 "dhgroup": "ffdhe8192" 00:14:36.758 } 00:14:36.759 } 00:14:36.759 ]' 00:14:36.759 11:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.759 11:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:36.759 11:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.759 11:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:36.759 11:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.017 11:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.017 11:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.017 11:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.017 11:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:14:37.017 11:37:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:37.953 
11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:37.953 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:38.211 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:38.470 00:14:38.470 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:38.470 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.470 11:37:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.728 11:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.728 11:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.728 11:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.728 11:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.728 11:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.728 11:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:38.728 { 00:14:38.728 "cntlid": 95, 00:14:38.728 "qid": 0, 00:14:38.728 "state": "enabled", 00:14:38.728 "thread": "nvmf_tgt_poll_group_000", 00:14:38.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:38.728 "listen_address": { 00:14:38.728 "trtype": "RDMA", 00:14:38.728 "adrfam": "IPv4", 00:14:38.728 "traddr": "10.0.0.2", 00:14:38.728 "trsvcid": "4420" 00:14:38.728 }, 00:14:38.728 "peer_address": { 00:14:38.728 "trtype": "RDMA", 00:14:38.728 "adrfam": "IPv4", 00:14:38.728 "traddr": "10.0.0.2", 00:14:38.729 "trsvcid": "57532" 00:14:38.729 }, 00:14:38.729 "auth": { 00:14:38.729 "state": "completed", 00:14:38.729 "digest": "sha384", 00:14:38.729 "dhgroup": "ffdhe8192" 00:14:38.729 } 00:14:38.729 } 00:14:38.729 ]' 00:14:38.729 11:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:38.729 11:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:38.729 11:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.987 11:37:42 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:38.987 11:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.987 11:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.987 11:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.987 11:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.319 11:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:14:39.319 11:37:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:14:39.887 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.887 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:39.887 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.887 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:39.887 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.887 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:39.887 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:39.887 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.887 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:39.887 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:40.146 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:40.146 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.146 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:40.146 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:40.146 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:40.146 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.146 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.146 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:40.146 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.146 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.146 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.146 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.146 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.404 00:14:40.404 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.404 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.404 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.662 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.662 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.662 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.662 11:37:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.662 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.662 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.662 { 00:14:40.662 "cntlid": 97, 00:14:40.662 "qid": 0, 00:14:40.662 "state": "enabled", 00:14:40.662 "thread": "nvmf_tgt_poll_group_000", 00:14:40.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:40.662 "listen_address": { 00:14:40.662 "trtype": "RDMA", 00:14:40.662 "adrfam": "IPv4", 00:14:40.662 "traddr": "10.0.0.2", 00:14:40.662 "trsvcid": "4420" 00:14:40.662 }, 00:14:40.662 "peer_address": { 00:14:40.662 "trtype": "RDMA", 00:14:40.662 "adrfam": "IPv4", 00:14:40.662 "traddr": "10.0.0.2", 00:14:40.662 "trsvcid": "50711" 00:14:40.662 }, 00:14:40.662 "auth": { 00:14:40.662 "state": "completed", 00:14:40.662 "digest": "sha512", 00:14:40.662 "dhgroup": "null" 00:14:40.662 } 00:14:40.662 } 00:14:40.662 ]' 00:14:40.662 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.663 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:40.663 11:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.663 11:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:40.663 11:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.663 11:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.663 11:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.663 11:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.922 11:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:14:40.922 11:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:14:41.489 11:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.748 11:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:41.748 11:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.748 11:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.748 11:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.748 11:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.748 11:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:41.748 11:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:41.748 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:41.748 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.748 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:41.748 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:41.748 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:41.748 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.748 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.748 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.748 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.748 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.749 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.749 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.749 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.007 00:14:42.007 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.007 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.007 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.265 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.265 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.265 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.265 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.265 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.265 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.265 { 00:14:42.265 "cntlid": 99, 00:14:42.265 "qid": 0, 00:14:42.265 "state": "enabled", 00:14:42.265 "thread": "nvmf_tgt_poll_group_000", 00:14:42.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:42.265 "listen_address": { 
00:14:42.266 "trtype": "RDMA", 00:14:42.266 "adrfam": "IPv4", 00:14:42.266 "traddr": "10.0.0.2", 00:14:42.266 "trsvcid": "4420" 00:14:42.266 }, 00:14:42.266 "peer_address": { 00:14:42.266 "trtype": "RDMA", 00:14:42.266 "adrfam": "IPv4", 00:14:42.266 "traddr": "10.0.0.2", 00:14:42.266 "trsvcid": "36136" 00:14:42.266 }, 00:14:42.266 "auth": { 00:14:42.266 "state": "completed", 00:14:42.266 "digest": "sha512", 00:14:42.266 "dhgroup": "null" 00:14:42.266 } 00:14:42.266 } 00:14:42.266 ]' 00:14:42.266 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.266 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.266 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.524 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:42.524 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.524 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.524 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.524 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.524 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:14:42.524 11:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:14:43.459 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.459 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:43.459 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.459 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.459 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.459 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.460 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:43.460 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:43.460 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:43.460 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.460 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 
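The trace repeats one cycle per key: configure host DH-HMAC-CHAP options, register the host NQN with its key pair on the target, attach a controller with the same keys, then verify the qpair's negotiated auth parameters before detaching. A hedged sketch of that cycle, reconstructed from the `target/auth.sh` calls visible in the log (socket paths, NQNs, and address come from the log; the function names, the `tgtrpc` helper, and the unconditional controller-key handling are assumptions — the real script passes `ckey$keyid` only when a controller key exists):

```shell
#!/usr/bin/env bash
# Sketch of the per-key DH-HMAC-CHAP test cycle seen in the trace above.
# Requires a live SPDK target plus a host bdev_nvme instance on host.sock;
# defining these functions has no effect without them.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562

hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }  # host-side RPCs (hostrpc in the log)
tgtrpc()  { "$rpc" "$@"; }                        # target-side RPCs (rpc_cmd in the log)

connect_authenticate() {  # e.g. connect_authenticate sha512 null 2
    local digest=$1 dhgroup=$2 keyid=$3 qpairs
    # Restrict the host to one digest/dhgroup so the negotiation is deterministic.
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    tgtrpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # The qpair must report completed auth with the expected parameters.
    qpairs=$(tgtrpc nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$digest"  ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed  ]]
    hostrpc bdev_nvme_detach_controller nvme0
}
```

After this RPC-level check, the log exercises the same keys through the kernel path (`nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ...`, then `nvme disconnect`) and removes the host with `nvmf_subsystem_remove_host` before the next digest/dhgroup combination.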
00:14:43.460 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:43.460 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:43.460 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.460 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.460 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.460 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.718 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.718 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.718 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.718 11:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.718 00:14:43.977 11:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.977 11:37:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.977 11:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.977 11:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.977 11:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.977 11:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.977 11:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.977 11:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.977 11:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.977 { 00:14:43.977 "cntlid": 101, 00:14:43.977 "qid": 0, 00:14:43.977 "state": "enabled", 00:14:43.977 "thread": "nvmf_tgt_poll_group_000", 00:14:43.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:43.977 "listen_address": { 00:14:43.977 "trtype": "RDMA", 00:14:43.977 "adrfam": "IPv4", 00:14:43.977 "traddr": "10.0.0.2", 00:14:43.977 "trsvcid": "4420" 00:14:43.977 }, 00:14:43.977 "peer_address": { 00:14:43.977 "trtype": "RDMA", 00:14:43.977 "adrfam": "IPv4", 00:14:43.977 "traddr": "10.0.0.2", 00:14:43.977 "trsvcid": "52097" 00:14:43.977 }, 00:14:43.977 "auth": { 00:14:43.977 "state": "completed", 00:14:43.977 "digest": "sha512", 00:14:43.977 "dhgroup": "null" 00:14:43.977 } 00:14:43.977 } 00:14:43.977 ]' 00:14:43.977 11:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.236 11:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:14:44.236 11:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.236 11:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:44.236 11:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.237 11:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.237 11:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.237 11:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.495 11:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:14:44.495 11:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:14:45.062 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.062 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:45.062 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.062 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.062 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.062 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.062 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:45.063 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:45.321 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:45.321 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.321 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:45.321 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:45.321 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:45.321 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.321 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:14:45.321 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.321 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.321 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.321 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:45.321 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:45.321 11:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:45.579 00:14:45.579 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.579 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.579 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.837 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.837 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.837 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.837 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:45.837 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.837 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:45.837 { 00:14:45.837 "cntlid": 103, 00:14:45.837 "qid": 0, 00:14:45.837 "state": "enabled", 00:14:45.837 "thread": "nvmf_tgt_poll_group_000", 00:14:45.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:45.838 "listen_address": { 00:14:45.838 "trtype": "RDMA", 00:14:45.838 "adrfam": "IPv4", 00:14:45.838 "traddr": "10.0.0.2", 00:14:45.838 "trsvcid": "4420" 00:14:45.838 }, 00:14:45.838 "peer_address": { 00:14:45.838 "trtype": "RDMA", 00:14:45.838 "adrfam": "IPv4", 00:14:45.838 "traddr": "10.0.0.2", 00:14:45.838 "trsvcid": "56057" 00:14:45.838 }, 00:14:45.838 "auth": { 00:14:45.838 "state": "completed", 00:14:45.838 "digest": "sha512", 00:14:45.838 "dhgroup": "null" 00:14:45.838 } 00:14:45.838 } 00:14:45.838 ]' 00:14:45.838 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:45.838 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:45.838 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.838 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:45.838 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.096 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.096 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.096 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.096 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:14:46.096 11:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:47.030 11:37:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.030 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.289 00:14:47.289 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.289 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.289 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.548 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.548 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.548 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.548 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.548 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.548 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.548 { 00:14:47.548 "cntlid": 105, 00:14:47.548 "qid": 0, 00:14:47.548 "state": "enabled", 00:14:47.548 "thread": "nvmf_tgt_poll_group_000", 00:14:47.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:47.548 "listen_address": { 00:14:47.548 "trtype": "RDMA", 00:14:47.548 "adrfam": "IPv4", 00:14:47.548 "traddr": "10.0.0.2", 
00:14:47.548 "trsvcid": "4420" 00:14:47.548 }, 00:14:47.548 "peer_address": { 00:14:47.548 "trtype": "RDMA", 00:14:47.548 "adrfam": "IPv4", 00:14:47.548 "traddr": "10.0.0.2", 00:14:47.548 "trsvcid": "46217" 00:14:47.548 }, 00:14:47.548 "auth": { 00:14:47.548 "state": "completed", 00:14:47.548 "digest": "sha512", 00:14:47.548 "dhgroup": "ffdhe2048" 00:14:47.548 } 00:14:47.548 } 00:14:47.548 ]' 00:14:47.548 11:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:47.548 11:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:47.548 11:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.807 11:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:47.807 11:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.807 11:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.807 11:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.807 11:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.066 11:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:14:48.066 11:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:14:48.633 11:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.633 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:48.633 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.633 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.633 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.633 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.633 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:48.633 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:48.892 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:48.892 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.892 11:37:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:48.892 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:48.892 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:48.892 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.892 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.892 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.892 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.892 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.892 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.892 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.892 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.151 00:14:49.151 11:37:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.151 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.151 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.409 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.409 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.409 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.409 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.409 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.409 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.409 { 00:14:49.409 "cntlid": 107, 00:14:49.409 "qid": 0, 00:14:49.409 "state": "enabled", 00:14:49.409 "thread": "nvmf_tgt_poll_group_000", 00:14:49.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:49.409 "listen_address": { 00:14:49.409 "trtype": "RDMA", 00:14:49.409 "adrfam": "IPv4", 00:14:49.409 "traddr": "10.0.0.2", 00:14:49.409 "trsvcid": "4420" 00:14:49.409 }, 00:14:49.409 "peer_address": { 00:14:49.409 "trtype": "RDMA", 00:14:49.409 "adrfam": "IPv4", 00:14:49.409 "traddr": "10.0.0.2", 00:14:49.409 "trsvcid": "37409" 00:14:49.409 }, 00:14:49.409 "auth": { 00:14:49.409 "state": "completed", 00:14:49.409 "digest": "sha512", 00:14:49.409 "dhgroup": "ffdhe2048" 00:14:49.409 } 00:14:49.409 } 00:14:49.409 ]' 00:14:49.409 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.409 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:49.409 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.409 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:49.409 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.409 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.409 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.409 11:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.668 11:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:14:49.668 11:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:14:50.235 11:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.493 11:37:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:50.493 11:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.493 11:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.493 11:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.493 11:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.493 11:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:50.493 11:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:50.753 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:50.753 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.753 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:50.753 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:50.753 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:50.753 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.753 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.753 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.753 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.753 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.753 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.753 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.753 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.011 00:14:51.011 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.011 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.012 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.270 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.270 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.270 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.270 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.270 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.270 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.270 { 00:14:51.270 "cntlid": 109, 00:14:51.270 "qid": 0, 00:14:51.270 "state": "enabled", 00:14:51.270 "thread": "nvmf_tgt_poll_group_000", 00:14:51.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:51.270 "listen_address": { 00:14:51.270 "trtype": "RDMA", 00:14:51.270 "adrfam": "IPv4", 00:14:51.270 "traddr": "10.0.0.2", 00:14:51.270 "trsvcid": "4420" 00:14:51.270 }, 00:14:51.270 "peer_address": { 00:14:51.270 "trtype": "RDMA", 00:14:51.270 "adrfam": "IPv4", 00:14:51.270 "traddr": "10.0.0.2", 00:14:51.270 "trsvcid": "54102" 00:14:51.270 }, 00:14:51.270 "auth": { 00:14:51.270 "state": "completed", 00:14:51.270 "digest": "sha512", 00:14:51.270 "dhgroup": "ffdhe2048" 00:14:51.270 } 00:14:51.270 } 00:14:51.271 ]' 00:14:51.271 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.271 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:51.271 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.271 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:51.271 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.271 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:14:51.271 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.271 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.529 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:14:51.529 11:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:14:52.096 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.354 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:52.354 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.354 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.354 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.354 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # 
for keyid in "${!keys[@]}" 00:14:52.354 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:52.354 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:52.354 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:52.354 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.354 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:52.354 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:52.354 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:52.354 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.354 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:14:52.354 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.355 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.355 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.355 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:52.355 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # 
hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.355 11:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.614 00:14:52.614 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.614 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.614 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.872 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.872 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.872 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.872 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.872 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.872 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.872 { 00:14:52.872 "cntlid": 111, 00:14:52.872 "qid": 0, 00:14:52.872 "state": "enabled", 00:14:52.872 "thread": "nvmf_tgt_poll_group_000", 00:14:52.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 
00:14:52.872 "listen_address": { 00:14:52.872 "trtype": "RDMA", 00:14:52.872 "adrfam": "IPv4", 00:14:52.872 "traddr": "10.0.0.2", 00:14:52.872 "trsvcid": "4420" 00:14:52.872 }, 00:14:52.872 "peer_address": { 00:14:52.873 "trtype": "RDMA", 00:14:52.873 "adrfam": "IPv4", 00:14:52.873 "traddr": "10.0.0.2", 00:14:52.873 "trsvcid": "51795" 00:14:52.873 }, 00:14:52.873 "auth": { 00:14:52.873 "state": "completed", 00:14:52.873 "digest": "sha512", 00:14:52.873 "dhgroup": "ffdhe2048" 00:14:52.873 } 00:14:52.873 } 00:14:52.873 ]' 00:14:52.873 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.131 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:53.131 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.131 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:53.131 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.131 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.131 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.131 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.390 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:14:53.390 11:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:14:53.957 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.957 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:53.957 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.957 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.957 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.957 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:53.957 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.957 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:53.957 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:54.215 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:54.215 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.215 11:37:57 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:54.215 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:54.215 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:54.215 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.215 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.215 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.215 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.215 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.215 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.215 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.215 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.474 00:14:54.474 11:37:57 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.474 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.474 11:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.732 11:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.732 11:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.732 11:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.732 11:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.732 11:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.732 11:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.732 { 00:14:54.732 "cntlid": 113, 00:14:54.732 "qid": 0, 00:14:54.732 "state": "enabled", 00:14:54.732 "thread": "nvmf_tgt_poll_group_000", 00:14:54.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:54.732 "listen_address": { 00:14:54.732 "trtype": "RDMA", 00:14:54.732 "adrfam": "IPv4", 00:14:54.732 "traddr": "10.0.0.2", 00:14:54.732 "trsvcid": "4420" 00:14:54.732 }, 00:14:54.732 "peer_address": { 00:14:54.732 "trtype": "RDMA", 00:14:54.732 "adrfam": "IPv4", 00:14:54.732 "traddr": "10.0.0.2", 00:14:54.732 "trsvcid": "49378" 00:14:54.732 }, 00:14:54.732 "auth": { 00:14:54.732 "state": "completed", 00:14:54.732 "digest": "sha512", 00:14:54.732 "dhgroup": "ffdhe3072" 00:14:54.732 } 00:14:54.732 } 00:14:54.732 ]' 00:14:54.732 11:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.732 11:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:54.732 11:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.732 11:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:54.732 11:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.990 11:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.990 11:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.990 11:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.990 11:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:14:54.990 11:37:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:14:55.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.925 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.183 00:14:56.442 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.442 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.442 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.442 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.442 
11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.442 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.442 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.442 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.442 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.442 { 00:14:56.442 "cntlid": 115, 00:14:56.442 "qid": 0, 00:14:56.442 "state": "enabled", 00:14:56.442 "thread": "nvmf_tgt_poll_group_000", 00:14:56.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:56.442 "listen_address": { 00:14:56.442 "trtype": "RDMA", 00:14:56.442 "adrfam": "IPv4", 00:14:56.442 "traddr": "10.0.0.2", 00:14:56.442 "trsvcid": "4420" 00:14:56.442 }, 00:14:56.442 "peer_address": { 00:14:56.442 "trtype": "RDMA", 00:14:56.442 "adrfam": "IPv4", 00:14:56.442 "traddr": "10.0.0.2", 00:14:56.442 "trsvcid": "54793" 00:14:56.442 }, 00:14:56.442 "auth": { 00:14:56.442 "state": "completed", 00:14:56.442 "digest": "sha512", 00:14:56.442 "dhgroup": "ffdhe3072" 00:14:56.442 } 00:14:56.442 } 00:14:56.442 ]' 00:14:56.442 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.700 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:56.700 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.700 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:56.700 11:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.700 11:38:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.700 11:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.700 11:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.959 11:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:14:56.959 11:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:14:57.529 11:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.529 11:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:57.529 11:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.529 11:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.529 11:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:57.529 11:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.529 11:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:57.529 11:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:57.789 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:57.789 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.789 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:57.789 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:57.789 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:57.789 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.789 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.789 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.789 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.789 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.789 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.789 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.789 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.048 00:14:58.048 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.048 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.048 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.306 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.306 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.306 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.306 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.306 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.306 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.306 { 00:14:58.306 "cntlid": 117, 00:14:58.306 
"qid": 0, 00:14:58.306 "state": "enabled", 00:14:58.306 "thread": "nvmf_tgt_poll_group_000", 00:14:58.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:14:58.306 "listen_address": { 00:14:58.306 "trtype": "RDMA", 00:14:58.306 "adrfam": "IPv4", 00:14:58.306 "traddr": "10.0.0.2", 00:14:58.306 "trsvcid": "4420" 00:14:58.306 }, 00:14:58.306 "peer_address": { 00:14:58.306 "trtype": "RDMA", 00:14:58.306 "adrfam": "IPv4", 00:14:58.306 "traddr": "10.0.0.2", 00:14:58.306 "trsvcid": "57841" 00:14:58.306 }, 00:14:58.306 "auth": { 00:14:58.306 "state": "completed", 00:14:58.306 "digest": "sha512", 00:14:58.306 "dhgroup": "ffdhe3072" 00:14:58.306 } 00:14:58.306 } 00:14:58.306 ]' 00:14:58.306 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.306 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:58.306 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.306 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:58.306 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.564 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.564 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.564 11:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.564 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:14:58.564 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:59.501 11:38:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.501 11:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.760 00:14:59.760 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.760 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.761 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.019 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.019 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.019 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.019 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.019 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.019 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.019 { 00:15:00.019 "cntlid": 119, 00:15:00.019 "qid": 0, 00:15:00.019 "state": "enabled", 00:15:00.019 "thread": "nvmf_tgt_poll_group_000", 00:15:00.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:00.019 "listen_address": { 00:15:00.019 "trtype": "RDMA", 00:15:00.019 "adrfam": "IPv4", 00:15:00.019 "traddr": "10.0.0.2", 00:15:00.019 "trsvcid": "4420" 00:15:00.019 }, 00:15:00.019 "peer_address": { 00:15:00.019 "trtype": "RDMA", 00:15:00.019 "adrfam": "IPv4", 00:15:00.019 "traddr": "10.0.0.2", 00:15:00.019 "trsvcid": "51517" 00:15:00.019 }, 00:15:00.019 "auth": { 00:15:00.019 "state": "completed", 00:15:00.019 "digest": "sha512", 
00:15:00.019 "dhgroup": "ffdhe3072" 00:15:00.019 } 00:15:00.019 } 00:15:00.019 ]' 00:15:00.019 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.019 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:00.019 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.277 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:00.277 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.277 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.277 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.277 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.277 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:15:00.277 11:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:15:01.211 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:15:01.211 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:01.211 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.211 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.211 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.211 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:01.211 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.211 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:01.211 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:01.470 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:15:01.470 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.470 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:01.470 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:01.470 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:01.470 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.470 
11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.470 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.470 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.470 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.470 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.470 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.470 11:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.728 00:15:01.728 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.728 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.728 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.986 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.986 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.986 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.986 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.986 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.986 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.986 { 00:15:01.986 "cntlid": 121, 00:15:01.986 "qid": 0, 00:15:01.986 "state": "enabled", 00:15:01.986 "thread": "nvmf_tgt_poll_group_000", 00:15:01.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:01.986 "listen_address": { 00:15:01.986 "trtype": "RDMA", 00:15:01.986 "adrfam": "IPv4", 00:15:01.986 "traddr": "10.0.0.2", 00:15:01.986 "trsvcid": "4420" 00:15:01.986 }, 00:15:01.986 "peer_address": { 00:15:01.986 "trtype": "RDMA", 00:15:01.986 "adrfam": "IPv4", 00:15:01.986 "traddr": "10.0.0.2", 00:15:01.986 "trsvcid": "56482" 00:15:01.986 }, 00:15:01.986 "auth": { 00:15:01.986 "state": "completed", 00:15:01.986 "digest": "sha512", 00:15:01.986 "dhgroup": "ffdhe4096" 00:15:01.986 } 00:15:01.986 } 00:15:01.986 ]' 00:15:01.986 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.986 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:01.986 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.986 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:01.986 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.987 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.987 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.987 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.244 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:15:02.244 11:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:15:02.862 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.862 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:02.862 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.862 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:03.153 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.153 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.153 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:03.153 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:03.153 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:15:03.153 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.153 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:03.153 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:03.153 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:03.153 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.153 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.153 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.154 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.154 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.154 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.154 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.154 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.412 00:15:03.412 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.412 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.412 11:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.670 11:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.670 11:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.670 11:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.670 11:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.670 11:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
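The trace above repeatedly dumps the `nvmf_subsystem_get_qpairs` output and then checks three fields with `jq` (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`). The same verification can be sketched in Python; the JSON below is a representative entry copied from the trace above (cntlid 121, the ffdhe4096/key1 iteration), not a live RPC result:

```python
import json

# One qpair entry as printed by nvmf_subsystem_get_qpairs in the trace above.
# The peer trsvcid is an ephemeral port and differs on every connection.
qpairs = json.loads("""
[
  {
    "cntlid": 121,
    "qid": 0,
    "state": "enabled",
    "thread": "nvmf_tgt_poll_group_000",
    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562",
    "listen_address": { "trtype": "RDMA", "adrfam": "IPv4", "traddr": "10.0.0.2", "trsvcid": "4420" },
    "peer_address":  { "trtype": "RDMA", "adrfam": "IPv4", "traddr": "10.0.0.2", "trsvcid": "56482" },
    "auth": { "state": "completed", "digest": "sha512", "dhgroup": "ffdhe4096" }
  }
]
""")

auth = qpairs[0]["auth"]
# Mirror the three jq checks from target/auth.sh: the negotiated digest and
# dhgroup must match what bdev_nvme_set_options restricted the host to, and
# the DH-HMAC-CHAP transaction must have completed.
assert auth["digest"] == "sha512"
assert auth["dhgroup"] == "ffdhe4096"
assert auth["state"] == "completed"
print("auth negotiated:", auth["digest"], auth["dhgroup"], auth["state"])
```

This is only an illustration of what the `jq`/`[[ … ]]` pairs in the trace assert; the actual script drives the checks in bash against the live RPC socket.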
00:15:03.670 11:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.670 { 00:15:03.670 "cntlid": 123, 00:15:03.670 "qid": 0, 00:15:03.670 "state": "enabled", 00:15:03.670 "thread": "nvmf_tgt_poll_group_000", 00:15:03.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:03.670 "listen_address": { 00:15:03.670 "trtype": "RDMA", 00:15:03.670 "adrfam": "IPv4", 00:15:03.670 "traddr": "10.0.0.2", 00:15:03.670 "trsvcid": "4420" 00:15:03.670 }, 00:15:03.670 "peer_address": { 00:15:03.670 "trtype": "RDMA", 00:15:03.670 "adrfam": "IPv4", 00:15:03.670 "traddr": "10.0.0.2", 00:15:03.670 "trsvcid": "36880" 00:15:03.670 }, 00:15:03.670 "auth": { 00:15:03.670 "state": "completed", 00:15:03.670 "digest": "sha512", 00:15:03.671 "dhgroup": "ffdhe4096" 00:15:03.671 } 00:15:03.671 } 00:15:03.671 ]' 00:15:03.671 11:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.671 11:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:03.671 11:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.671 11:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:03.671 11:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.928 11:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.928 11:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.928 11:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.928 11:38:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:15:03.928 11:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:15:04.513 11:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.771 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:04.771 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.771 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.771 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.771 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.771 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:04.771 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:05.030 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:15:05.030 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.030 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:05.030 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:05.030 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:05.030 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.030 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.030 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.030 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.030 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.030 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.030 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.030 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.288 00:15:05.288 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.288 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.288 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.548 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.548 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.548 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.548 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.548 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.548 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.548 { 00:15:05.548 "cntlid": 125, 00:15:05.548 "qid": 0, 00:15:05.548 "state": "enabled", 00:15:05.548 "thread": "nvmf_tgt_poll_group_000", 00:15:05.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:05.548 "listen_address": { 00:15:05.548 "trtype": "RDMA", 00:15:05.548 "adrfam": "IPv4", 00:15:05.548 "traddr": "10.0.0.2", 00:15:05.548 "trsvcid": "4420" 00:15:05.548 }, 00:15:05.548 "peer_address": { 00:15:05.548 "trtype": "RDMA", 00:15:05.548 "adrfam": "IPv4", 
00:15:05.548 "traddr": "10.0.0.2", 00:15:05.548 "trsvcid": "33367" 00:15:05.548 }, 00:15:05.548 "auth": { 00:15:05.548 "state": "completed", 00:15:05.548 "digest": "sha512", 00:15:05.548 "dhgroup": "ffdhe4096" 00:15:05.548 } 00:15:05.548 } 00:15:05.548 ]' 00:15:05.548 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.548 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:05.548 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.548 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:05.548 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.548 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.548 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.548 11:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.807 11:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:15:05.808 11:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:15:06.376 11:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.635 11:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:06.635 11:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.635 11:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.635 11:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.635 11:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.635 11:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:06.635 11:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:06.893 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:15:06.893 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.893 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:06.893 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:06.893 
11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:06.893 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.893 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:15:06.893 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.893 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.893 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.893 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:06.893 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.893 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:07.152 00:15:07.152 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.152 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.152 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.152 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.152 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.152 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.152 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.410 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.410 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.410 { 00:15:07.410 "cntlid": 127, 00:15:07.410 "qid": 0, 00:15:07.410 "state": "enabled", 00:15:07.410 "thread": "nvmf_tgt_poll_group_000", 00:15:07.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:07.411 "listen_address": { 00:15:07.411 "trtype": "RDMA", 00:15:07.411 "adrfam": "IPv4", 00:15:07.411 "traddr": "10.0.0.2", 00:15:07.411 "trsvcid": "4420" 00:15:07.411 }, 00:15:07.411 "peer_address": { 00:15:07.411 "trtype": "RDMA", 00:15:07.411 "adrfam": "IPv4", 00:15:07.411 "traddr": "10.0.0.2", 00:15:07.411 "trsvcid": "52898" 00:15:07.411 }, 00:15:07.411 "auth": { 00:15:07.411 "state": "completed", 00:15:07.411 "digest": "sha512", 00:15:07.411 "dhgroup": "ffdhe4096" 00:15:07.411 } 00:15:07.411 } 00:15:07.411 ]' 00:15:07.411 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.411 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:07.411 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.411 11:38:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:07.411 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.411 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.411 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.411 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.670 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:15:07.670 11:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:15:08.235 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.235 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:08.235 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.235 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:08.235 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.235 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.235 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.235 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:08.235 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:08.494 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:15:08.494 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.494 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:08.494 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:08.494 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:08.494 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.494 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.494 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.494 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:15:08.494 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.494 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.494 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.494 11:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.061 00:15:09.061 11:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.061 11:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.061 11:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.061 11:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.061 11:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.061 11:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.061 11:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.061 11:38:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.061 11:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.061 { 00:15:09.061 "cntlid": 129, 00:15:09.061 "qid": 0, 00:15:09.061 "state": "enabled", 00:15:09.061 "thread": "nvmf_tgt_poll_group_000", 00:15:09.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:09.061 "listen_address": { 00:15:09.061 "trtype": "RDMA", 00:15:09.061 "adrfam": "IPv4", 00:15:09.061 "traddr": "10.0.0.2", 00:15:09.061 "trsvcid": "4420" 00:15:09.061 }, 00:15:09.061 "peer_address": { 00:15:09.061 "trtype": "RDMA", 00:15:09.061 "adrfam": "IPv4", 00:15:09.061 "traddr": "10.0.0.2", 00:15:09.061 "trsvcid": "33820" 00:15:09.061 }, 00:15:09.061 "auth": { 00:15:09.061 "state": "completed", 00:15:09.061 "digest": "sha512", 00:15:09.061 "dhgroup": "ffdhe6144" 00:15:09.061 } 00:15:09.061 } 00:15:09.061 ]' 00:15:09.061 11:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.062 11:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:09.062 11:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.320 11:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:09.320 11:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.320 11:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.320 11:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.320 11:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.579 11:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:15:09.579 11:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:15:10.145 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.145 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:10.145 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.145 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.145 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.145 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.145 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:10.145 11:38:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:10.404 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:15:10.404 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.404 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:10.404 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:10.404 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:10.404 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.404 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.404 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.404 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.404 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.404 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.404 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.404 11:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.663 00:15:10.663 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.663 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.663 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.923 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.923 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.923 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.923 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.923 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.923 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.923 { 00:15:10.923 "cntlid": 131, 00:15:10.923 "qid": 0, 00:15:10.923 "state": "enabled", 00:15:10.923 "thread": "nvmf_tgt_poll_group_000", 00:15:10.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:10.923 "listen_address": { 00:15:10.923 "trtype": "RDMA", 00:15:10.923 "adrfam": "IPv4", 00:15:10.923 "traddr": "10.0.0.2", 
00:15:10.923 "trsvcid": "4420" 00:15:10.923 }, 00:15:10.923 "peer_address": { 00:15:10.923 "trtype": "RDMA", 00:15:10.923 "adrfam": "IPv4", 00:15:10.923 "traddr": "10.0.0.2", 00:15:10.923 "trsvcid": "58811" 00:15:10.923 }, 00:15:10.923 "auth": { 00:15:10.923 "state": "completed", 00:15:10.923 "digest": "sha512", 00:15:10.923 "dhgroup": "ffdhe6144" 00:15:10.923 } 00:15:10.923 } 00:15:10.923 ]' 00:15:10.923 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.923 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:10.923 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.181 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:11.181 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.181 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.181 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.181 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.441 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:15:11.441 11:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 
--hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:15:12.007 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.007 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:12.007 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.007 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.007 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.007 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.007 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:12.007 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:12.265 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:15:12.265 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.265 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:12.265 11:38:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:12.265 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:12.265 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.265 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.265 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.265 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.265 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.265 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.265 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.266 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.523 00:15:12.523 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.523 11:38:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.523 11:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.781 11:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.781 11:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.782 11:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.782 11:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.782 11:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.782 11:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.782 { 00:15:12.782 "cntlid": 133, 00:15:12.782 "qid": 0, 00:15:12.782 "state": "enabled", 00:15:12.782 "thread": "nvmf_tgt_poll_group_000", 00:15:12.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:12.782 "listen_address": { 00:15:12.782 "trtype": "RDMA", 00:15:12.782 "adrfam": "IPv4", 00:15:12.782 "traddr": "10.0.0.2", 00:15:12.782 "trsvcid": "4420" 00:15:12.782 }, 00:15:12.782 "peer_address": { 00:15:12.782 "trtype": "RDMA", 00:15:12.782 "adrfam": "IPv4", 00:15:12.782 "traddr": "10.0.0.2", 00:15:12.782 "trsvcid": "44691" 00:15:12.782 }, 00:15:12.782 "auth": { 00:15:12.782 "state": "completed", 00:15:12.782 "digest": "sha512", 00:15:12.782 "dhgroup": "ffdhe6144" 00:15:12.782 } 00:15:12.782 } 00:15:12.782 ]' 00:15:12.782 11:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.782 11:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:15:12.782 11:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.041 11:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:13.041 11:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.041 11:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.041 11:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.041 11:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.299 11:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:15:13.299 11:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:15:13.867 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.867 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:13.867 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.867 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.867 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.867 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.867 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:13.867 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:14.126 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:15:14.126 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.126 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:14.126 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:14.126 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:14.126 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.126 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:15:14.126 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.126 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.126 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.126 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:14.126 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:14.126 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:14.385 00:15:14.385 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.385 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.385 11:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.643 11:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.643 11:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.643 11:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.643 11:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:14.643 11:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.643 11:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.643 { 00:15:14.643 "cntlid": 135, 00:15:14.643 "qid": 0, 00:15:14.643 "state": "enabled", 00:15:14.643 "thread": "nvmf_tgt_poll_group_000", 00:15:14.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:14.643 "listen_address": { 00:15:14.643 "trtype": "RDMA", 00:15:14.643 "adrfam": "IPv4", 00:15:14.643 "traddr": "10.0.0.2", 00:15:14.643 "trsvcid": "4420" 00:15:14.643 }, 00:15:14.643 "peer_address": { 00:15:14.643 "trtype": "RDMA", 00:15:14.643 "adrfam": "IPv4", 00:15:14.643 "traddr": "10.0.0.2", 00:15:14.643 "trsvcid": "42251" 00:15:14.643 }, 00:15:14.643 "auth": { 00:15:14.643 "state": "completed", 00:15:14.643 "digest": "sha512", 00:15:14.643 "dhgroup": "ffdhe6144" 00:15:14.643 } 00:15:14.643 } 00:15:14.643 ]' 00:15:14.643 11:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.643 11:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:14.643 11:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.903 11:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:14.903 11:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.903 11:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.903 11:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.903 11:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.162 11:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:15:15.163 11:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:15:15.730 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.730 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:15.730 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.730 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.730 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.730 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:15.731 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.731 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:15.731 11:38:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:15.990 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:15:15.990 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.990 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:15.990 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:15.990 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:15.990 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.990 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.990 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.990 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.990 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.990 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.990 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.990 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.558 00:15:16.558 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.558 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.558 11:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.817 11:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.817 11:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.817 11:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.817 11:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.817 11:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.817 11:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.817 { 00:15:16.817 "cntlid": 137, 00:15:16.817 "qid": 0, 00:15:16.817 "state": "enabled", 00:15:16.817 "thread": "nvmf_tgt_poll_group_000", 00:15:16.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:16.817 "listen_address": { 00:15:16.817 "trtype": "RDMA", 00:15:16.817 "adrfam": "IPv4", 00:15:16.817 "traddr": "10.0.0.2", 
00:15:16.817 "trsvcid": "4420" 00:15:16.817 }, 00:15:16.817 "peer_address": { 00:15:16.817 "trtype": "RDMA", 00:15:16.817 "adrfam": "IPv4", 00:15:16.817 "traddr": "10.0.0.2", 00:15:16.817 "trsvcid": "56977" 00:15:16.817 }, 00:15:16.817 "auth": { 00:15:16.817 "state": "completed", 00:15:16.817 "digest": "sha512", 00:15:16.817 "dhgroup": "ffdhe8192" 00:15:16.817 } 00:15:16.817 } 00:15:16.817 ]' 00:15:16.817 11:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.817 11:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:16.817 11:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.817 11:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:16.817 11:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.817 11:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.817 11:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.817 11:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.076 11:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:15:17.076 11:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:15:17.643 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.903 11:38:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.903 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.471 00:15:18.472 11:38:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.472 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.472 11:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.731 11:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.731 11:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.731 11:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.731 11:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.731 11:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.731 11:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.731 { 00:15:18.731 "cntlid": 139, 00:15:18.731 "qid": 0, 00:15:18.731 "state": "enabled", 00:15:18.731 "thread": "nvmf_tgt_poll_group_000", 00:15:18.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:18.731 "listen_address": { 00:15:18.731 "trtype": "RDMA", 00:15:18.731 "adrfam": "IPv4", 00:15:18.731 "traddr": "10.0.0.2", 00:15:18.731 "trsvcid": "4420" 00:15:18.731 }, 00:15:18.731 "peer_address": { 00:15:18.731 "trtype": "RDMA", 00:15:18.731 "adrfam": "IPv4", 00:15:18.731 "traddr": "10.0.0.2", 00:15:18.731 "trsvcid": "44426" 00:15:18.731 }, 00:15:18.731 "auth": { 00:15:18.731 "state": "completed", 00:15:18.731 "digest": "sha512", 00:15:18.731 "dhgroup": "ffdhe8192" 00:15:18.731 } 00:15:18.731 } 00:15:18.731 ]' 00:15:18.731 11:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.731 11:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:18.731 11:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.731 11:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:18.731 11:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.990 11:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.990 11:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.990 11:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.990 11:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:15:18.990 11:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: --dhchap-ctrl-secret DHHC-1:02:MmI1NmMxNmNmN2E4MWE5MTU5N2Q0N2FlOTRhMGViNmE0ZGI3ZTA1MzhkZjVjOTVjUgeR8w==: 00:15:19.927 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.928 11:38:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.928 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.495 00:15:20.495 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.495 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.495 11:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.754 11:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.755 11:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.755 11:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.755 11:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.755 11:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.755 11:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.755 { 00:15:20.755 "cntlid": 141, 00:15:20.755 "qid": 0, 00:15:20.755 "state": "enabled", 00:15:20.755 "thread": "nvmf_tgt_poll_group_000", 00:15:20.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:20.755 "listen_address": { 00:15:20.755 "trtype": "RDMA", 00:15:20.755 "adrfam": "IPv4", 00:15:20.755 "traddr": "10.0.0.2", 00:15:20.755 "trsvcid": "4420" 00:15:20.755 }, 00:15:20.755 "peer_address": { 00:15:20.755 "trtype": "RDMA", 00:15:20.755 "adrfam": "IPv4", 00:15:20.755 "traddr": "10.0.0.2", 00:15:20.755 "trsvcid": "59463" 00:15:20.755 }, 00:15:20.755 "auth": { 00:15:20.755 "state": "completed", 00:15:20.755 "digest": "sha512", 00:15:20.755 "dhgroup": "ffdhe8192" 00:15:20.755 } 00:15:20.755 } 00:15:20.755 ]' 00:15:20.755 11:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.755 11:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:20.755 11:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.755 11:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:20.755 11:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.755 11:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:15:20.755 11:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.755 11:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.013 11:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:15:21.014 11:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:01:NmFkZmMyZmY3OTg4MzE0ZWE5ODVkNjdhZTFkMWExNTZEEEPs: 00:15:21.580 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.839 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:21.839 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.839 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.839 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.839 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # 
for keyid in "${!keys[@]}" 00:15:21.839 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:21.839 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:22.099 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:15:22.099 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.099 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:22.099 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:22.099 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:22.099 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.099 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:15:22.099 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.099 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.099 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.099 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:22.099 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # 
hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.099 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.667 00:15:22.667 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.667 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.667 11:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.667 11:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.667 11:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.667 11:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.667 11:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.667 11:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.667 11:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.667 { 00:15:22.667 "cntlid": 143, 00:15:22.667 "qid": 0, 00:15:22.667 "state": "enabled", 00:15:22.667 "thread": "nvmf_tgt_poll_group_000", 00:15:22.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 
00:15:22.667 "listen_address": { 00:15:22.667 "trtype": "RDMA", 00:15:22.667 "adrfam": "IPv4", 00:15:22.667 "traddr": "10.0.0.2", 00:15:22.667 "trsvcid": "4420" 00:15:22.667 }, 00:15:22.667 "peer_address": { 00:15:22.667 "trtype": "RDMA", 00:15:22.667 "adrfam": "IPv4", 00:15:22.667 "traddr": "10.0.0.2", 00:15:22.667 "trsvcid": "52026" 00:15:22.667 }, 00:15:22.667 "auth": { 00:15:22.667 "state": "completed", 00:15:22.667 "digest": "sha512", 00:15:22.667 "dhgroup": "ffdhe8192" 00:15:22.667 } 00:15:22.667 } 00:15:22.667 ]' 00:15:22.667 11:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.667 11:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:22.667 11:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.927 11:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:22.927 11:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.927 11:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.927 11:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.927 11:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.185 11:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:15:23.185 11:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:15:23.752 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.752 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:23.752 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.752 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.752 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.752 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:23.752 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:23.752 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:23.752 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:23.752 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:23.752 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:24.010 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:24.010 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.010 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:24.010 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:24.010 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:24.010 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.010 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.010 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.010 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.010 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.010 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.010 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.010 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.577 00:15:24.577 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.577 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.578 11:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.836 11:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.836 11:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.836 11:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.836 11:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.836 11:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.836 11:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.836 { 00:15:24.836 "cntlid": 145, 00:15:24.836 "qid": 0, 00:15:24.836 "state": "enabled", 00:15:24.836 "thread": "nvmf_tgt_poll_group_000", 00:15:24.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:24.836 "listen_address": { 00:15:24.836 "trtype": "RDMA", 00:15:24.836 "adrfam": "IPv4", 00:15:24.836 "traddr": "10.0.0.2", 00:15:24.836 "trsvcid": "4420" 00:15:24.836 }, 00:15:24.836 "peer_address": { 00:15:24.836 "trtype": "RDMA", 00:15:24.836 "adrfam": "IPv4", 
00:15:24.836 "traddr": "10.0.0.2", 00:15:24.836 "trsvcid": "43274" 00:15:24.836 }, 00:15:24.836 "auth": { 00:15:24.836 "state": "completed", 00:15:24.836 "digest": "sha512", 00:15:24.836 "dhgroup": "ffdhe8192" 00:15:24.836 } 00:15:24.836 } 00:15:24.836 ]' 00:15:24.836 11:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.836 11:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:24.836 11:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.836 11:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:24.836 11:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.836 11:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.836 11:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.836 11:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.095 11:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:15:25.095 11:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:YTQ3YWYzMDA5N2YyM2VlNDBlMDQ3ZWFkMGM2ZjRmMzJjYWJmYzIxNTdmOGFlYjQ0XEcaWQ==: --dhchap-ctrl-secret DHHC-1:03:MTJhMWFhMTRmMGYyOGY2MWZlMzJiZmRhNTQ1MDI4MzI2N2FlMDdhODAxMjFjZTk3NTNhNzY2Yzc4N2Y3OTc1NME2dGM=: 00:15:25.660 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.920 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:25.920 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.920 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.920 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.920 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 00:15:25.920 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.920 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.920 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.920 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:25.920 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:25.920 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 
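The connect attempts above pass DH-HMAC-CHAP secrets in the `DHHC-1:<id>:<base64>:` form. As an aside, here is a minimal sketch of how such a secret string can be composed and round-tripped, assuming the commonly documented representation of base64(key || CRC-32 of the key, little-endian) with a two-digit hash identifier (`00` = no transform, `01`/`02`/`03` = SHA-256/384/512). This is an illustration of the wire format only, not SPDK's or nvme-cli's implementation; treat the CRC layout as an assumption.

```python
import base64
import zlib


def make_secret(key: bytes, hash_id: str = "00") -> str:
    # Assumption: payload is the raw key followed by a 4-byte
    # little-endian CRC-32 of the key, then base64-encoded.
    payload = key + zlib.crc32(key).to_bytes(4, "little")
    return f"DHHC-1:{hash_id}:{base64.b64encode(payload).decode()}:"


def parse_secret(secret: str):
    # Shape seen in the log: "DHHC-1:<2-digit id>:<base64>:" with a
    # trailing colon, so split(":") yields an empty final field.
    prefix, hash_id, b64, trailing = secret.split(":")
    assert prefix == "DHHC-1" and trailing == ""
    payload = base64.b64decode(b64)
    key, crc = payload[:-4], payload[-4:]
    # Verify the embedded integrity check before accepting the key.
    assert zlib.crc32(key).to_bytes(4, "little") == crc
    return hash_id, key
```

A secret built this way round-trips through `parse_secret`, which is the property the negative tests below rely on implicitly: a key slot that was never registered on the target fails authentication even though the secret string itself is well-formed.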
00:15:25.920 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:25.920 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:25.920 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:25.920 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:25.920 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:25.920 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:25.920 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:26.488 request: 00:15:26.488 { 00:15:26.488 "name": "nvme0", 00:15:26.488 "trtype": "rdma", 00:15:26.488 "traddr": "10.0.0.2", 00:15:26.488 "adrfam": "ipv4", 00:15:26.488 "trsvcid": "4420", 00:15:26.488 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:26.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:26.488 "prchk_reftag": false, 00:15:26.488 "prchk_guard": false, 00:15:26.488 "hdgst": false, 00:15:26.488 "ddgst": false, 00:15:26.488 "dhchap_key": "key2", 00:15:26.488 "allow_unrecognized_csi": false, 00:15:26.488 "method": "bdev_nvme_attach_controller", 00:15:26.488 "req_id": 1 00:15:26.488 } 00:15:26.488 Got 
JSON-RPC error response 00:15:26.488 response: 00:15:26.488 { 00:15:26.488 "code": -5, 00:15:26.488 "message": "Input/output error" 00:15:26.488 } 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:26.488 11:38:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:26.488 11:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:26.797 request: 00:15:26.798 { 00:15:26.798 "name": "nvme0", 00:15:26.798 "trtype": "rdma", 00:15:26.798 "traddr": "10.0.0.2", 00:15:26.798 "adrfam": "ipv4", 00:15:26.798 "trsvcid": "4420", 00:15:26.798 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:26.798 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:26.798 "prchk_reftag": false, 00:15:26.798 "prchk_guard": false, 00:15:26.798 "hdgst": false, 00:15:26.798 "ddgst": false, 00:15:26.798 "dhchap_key": "key1", 00:15:26.798 "dhchap_ctrlr_key": "ckey2", 00:15:26.798 "allow_unrecognized_csi": false, 00:15:26.798 "method": "bdev_nvme_attach_controller", 00:15:26.798 "req_id": 1 00:15:26.798 } 00:15:26.798 Got JSON-RPC error response 00:15:26.798 response: 00:15:26.798 { 00:15:26.798 "code": -5, 00:15:26.798 "message": "Input/output error" 00:15:26.798 } 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.798 
11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.798 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.425 request: 00:15:27.425 { 00:15:27.425 "name": "nvme0", 00:15:27.425 "trtype": "rdma", 00:15:27.425 "traddr": "10.0.0.2", 00:15:27.425 "adrfam": "ipv4", 00:15:27.425 "trsvcid": "4420", 00:15:27.425 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:27.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:27.425 "prchk_reftag": false, 00:15:27.425 "prchk_guard": false, 00:15:27.425 "hdgst": false, 00:15:27.425 "ddgst": false, 00:15:27.425 "dhchap_key": "key1", 00:15:27.425 "dhchap_ctrlr_key": "ckey1", 00:15:27.425 "allow_unrecognized_csi": false, 00:15:27.425 "method": "bdev_nvme_attach_controller", 00:15:27.425 "req_id": 1 00:15:27.425 } 00:15:27.425 Got JSON-RPC error response 00:15:27.425 response: 00:15:27.425 { 00:15:27.425 "code": -5, 00:15:27.425 "message": "Input/output error" 00:15:27.425 } 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1613798 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1613798 ']' 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1613798 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1613798 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1613798' 00:15:27.425 killing process with pid 1613798 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1613798 00:15:27.425 11:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1613798 00:15:27.684 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:27.684 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:15:27.684 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:27.684 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.684 11:38:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=1633220 00:15:27.684 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:27.684 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 1633220 00:15:27.684 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1633220 ']' 00:15:27.684 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.684 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.684 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.684 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.684 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.943 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:27.943 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:27.943 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:27.943 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:27.943 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.943 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.943 
11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:27.943 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1633220 00:15:27.943 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1633220 ']' 00:15:27.943 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.943 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.943 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.943 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.943 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.201 null0 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:28.201 
11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.H6z 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.oTx ]] 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oTx 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.YxO 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.201 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.459 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.459 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.wZh ]] 00:15:28.459 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.wZh 00:15:28.459 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.459 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.KXF 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.jnX ]] 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jnX 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.p22 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.460 11:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.025 nvme0n1 00:15:29.025 11:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.025 11:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.025 11:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.283 11:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.283 11:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.283 11:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.283 11:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.283 11:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.283 11:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.283 { 00:15:29.283 "cntlid": 1, 00:15:29.283 "qid": 0, 00:15:29.283 "state": "enabled", 00:15:29.283 "thread": "nvmf_tgt_poll_group_000", 00:15:29.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:29.283 "listen_address": { 00:15:29.283 "trtype": "RDMA", 00:15:29.283 
"adrfam": "IPv4", 00:15:29.283 "traddr": "10.0.0.2", 00:15:29.283 "trsvcid": "4420" 00:15:29.283 }, 00:15:29.283 "peer_address": { 00:15:29.283 "trtype": "RDMA", 00:15:29.283 "adrfam": "IPv4", 00:15:29.283 "traddr": "10.0.0.2", 00:15:29.283 "trsvcid": "34261" 00:15:29.283 }, 00:15:29.283 "auth": { 00:15:29.283 "state": "completed", 00:15:29.283 "digest": "sha512", 00:15:29.283 "dhgroup": "ffdhe8192" 00:15:29.283 } 00:15:29.283 } 00:15:29.283 ]' 00:15:29.283 11:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.283 11:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:29.283 11:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.541 11:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:29.541 11:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.541 11:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.542 11:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.542 11:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.799 11:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:15:29.799 11:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 
0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:15:30.365 11:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.365 11:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:30.365 11:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.365 11:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.365 11:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.365 11:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key3 00:15:30.365 11:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.365 11:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.365 11:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.365 11:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:30.365 11:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:30.623 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # 
NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:30.623 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:30.623 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:30.623 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:30.623 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:30.623 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:30.623 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:30.623 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:30.623 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:30.623 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:30.881 request: 00:15:30.881 { 00:15:30.881 "name": "nvme0", 00:15:30.881 "trtype": "rdma", 00:15:30.881 "traddr": "10.0.0.2", 00:15:30.881 "adrfam": "ipv4", 00:15:30.881 "trsvcid": "4420", 00:15:30.881 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:30.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 
00:15:30.881 "prchk_reftag": false, 00:15:30.881 "prchk_guard": false, 00:15:30.881 "hdgst": false, 00:15:30.881 "ddgst": false, 00:15:30.881 "dhchap_key": "key3", 00:15:30.881 "allow_unrecognized_csi": false, 00:15:30.881 "method": "bdev_nvme_attach_controller", 00:15:30.881 "req_id": 1 00:15:30.881 } 00:15:30.881 Got JSON-RPC error response 00:15:30.881 response: 00:15:30.881 { 00:15:30.881 "code": -5, 00:15:30.881 "message": "Input/output error" 00:15:30.881 } 00:15:30.881 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:30.881 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:30.881 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:30.881 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:30.881 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:30.881 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:30.881 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:30.881 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:31.139 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:31.139 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:31.139 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg 
bdev_connect -b nvme0 --dhchap-key key3 00:15:31.139 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:31.139 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:31.139 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:31.139 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:31.139 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:31.139 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.139 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.398 request: 00:15:31.398 { 00:15:31.398 "name": "nvme0", 00:15:31.398 "trtype": "rdma", 00:15:31.398 "traddr": "10.0.0.2", 00:15:31.398 "adrfam": "ipv4", 00:15:31.398 "trsvcid": "4420", 00:15:31.398 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:31.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:31.398 "prchk_reftag": false, 00:15:31.398 "prchk_guard": false, 00:15:31.398 "hdgst": false, 00:15:31.398 "ddgst": false, 00:15:31.398 "dhchap_key": "key3", 00:15:31.398 "allow_unrecognized_csi": false, 00:15:31.398 "method": "bdev_nvme_attach_controller", 00:15:31.398 "req_id": 
1 00:15:31.398 } 00:15:31.398 Got JSON-RPC error response 00:15:31.398 response: 00:15:31.398 { 00:15:31.398 "code": -5, 00:15:31.398 "message": "Input/output error" 00:15:31.398 } 00:15:31.398 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:31.398 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:31.398 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:31.398 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:31.398 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:31.398 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:31.398 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:31.398 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:31.398 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:31.398 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:31.657 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:31.657 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.657 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.657 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.657 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:31.657 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.657 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.657 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.657 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:31.657 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:31.657 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:31.657 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:31.657 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:31.657 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:31.657 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:31.657 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key 
key1 00:15:31.657 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:31.657 11:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:31.915 request: 00:15:31.915 { 00:15:31.915 "name": "nvme0", 00:15:31.915 "trtype": "rdma", 00:15:31.915 "traddr": "10.0.0.2", 00:15:31.915 "adrfam": "ipv4", 00:15:31.915 "trsvcid": "4420", 00:15:31.915 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:31.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:31.916 "prchk_reftag": false, 00:15:31.916 "prchk_guard": false, 00:15:31.916 "hdgst": false, 00:15:31.916 "ddgst": false, 00:15:31.916 "dhchap_key": "key0", 00:15:31.916 "dhchap_ctrlr_key": "key1", 00:15:31.916 "allow_unrecognized_csi": false, 00:15:31.916 "method": "bdev_nvme_attach_controller", 00:15:31.916 "req_id": 1 00:15:31.916 } 00:15:31.916 Got JSON-RPC error response 00:15:31.916 response: 00:15:31.916 { 00:15:31.916 "code": -5, 00:15:31.916 "message": "Input/output error" 00:15:31.916 } 00:15:31.916 11:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:31.916 11:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:31.916 11:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:31.916 11:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:31.916 11:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:31.916 11:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:31.916 11:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:32.174 nvme0n1 00:15:32.174 11:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:32.174 11:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:32.174 11:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.432 11:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.432 11:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.432 11:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.691 11:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 
--dhchap-key key1 00:15:32.691 11:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.691 11:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.691 11:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.691 11:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:32.691 11:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:32.691 11:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:33.627 nvme0n1 00:15:33.627 11:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:33.627 11:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:33.627 11:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.627 11:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.627 11:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 
--dhchap-ctrlr-key key3 00:15:33.627 11:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.627 11:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.627 11:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.627 11:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:33.627 11:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:33.627 11:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.885 11:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.885 11:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:15:33.885 11:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid 0049fda6-1adc-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: --dhchap-ctrl-secret DHHC-1:03:NzlmMmI3OTRlMjgwYTVjOTY5MzVlNDE5OWVkODc5YzVlODM2ZTAyM2MyMmIyOWI2MzBhMmQ5MzU4YzdlY2ZmMRXqla8=: 00:15:34.451 11:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:15:34.451 11:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:34.451 
11:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:34.451 11:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:34.451 11:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:34.451 11:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:34.451 11:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:34.451 11:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.451 11:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.710 11:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:34.710 11:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:34.710 11:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:34.710 11:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:34.710 11:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:34.710 11:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:34.710 11:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:34.710 11:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:34.710 11:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:34.710 11:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:35.276 request: 00:15:35.276 { 00:15:35.276 "name": "nvme0", 00:15:35.276 "trtype": "rdma", 00:15:35.276 "traddr": "10.0.0.2", 00:15:35.276 "adrfam": "ipv4", 00:15:35.276 "trsvcid": "4420", 00:15:35.276 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:35.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562", 00:15:35.276 "prchk_reftag": false, 00:15:35.276 "prchk_guard": false, 00:15:35.276 "hdgst": false, 00:15:35.276 "ddgst": false, 00:15:35.276 "dhchap_key": "key1", 00:15:35.276 "allow_unrecognized_csi": false, 00:15:35.276 "method": "bdev_nvme_attach_controller", 00:15:35.276 "req_id": 1 00:15:35.276 } 00:15:35.276 Got JSON-RPC error response 00:15:35.276 response: 00:15:35.276 { 00:15:35.276 "code": -5, 00:15:35.276 "message": "Input/output error" 00:15:35.276 } 00:15:35.276 11:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:35.276 11:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:35.276 11:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:35.276 11:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 
-- # (( !es == 0 )) 00:15:35.276 11:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:35.277 11:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:35.277 11:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:35.845 nvme0n1 00:15:35.845 11:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:35.845 11:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:35.845 11:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.105 11:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.105 11:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.105 11:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.364 11:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:36.364 11:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.364 11:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.364 11:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.364 11:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:36.364 11:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:36.364 11:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:36.623 nvme0n1 00:15:36.623 11:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:36.623 11:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:36.623 11:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.881 11:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.881 11:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.881 11:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.140 11:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:37.140 11:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.140 11:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.140 11:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.140 11:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: '' 2s 00:15:37.140 11:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:37.140 11:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:37.140 11:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: 00:15:37.140 11:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:37.140 11:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:37.140 11:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:37.140 11:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: ]] 00:15:37.140 11:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OWM5ZTI4NGI4ZGI2N2ViMDUxNGFhNzMzOTU2MTRiMzebUAbE: 00:15:37.140 11:38:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:37.140 11:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:37.140 11:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:39.043 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:15:39.043 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:39.043 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:39.043 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:39.043 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:39.043 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:39.043 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:39.043 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:39.043 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.043 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.043 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.044 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: 2s 00:15:39.044 11:38:42 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:39.044 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:39.044 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:39.044 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: 00:15:39.044 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:39.044 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:39.044 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:39.044 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: ]] 00:15:39.044 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MWFhY2YwNjNlZDI2YjY5ZmQ3NzhjZWQwNWFmN2IwOWJiMzRhODAwNzU4ZWQ3YzdjfzsdNQ==: 00:15:39.044 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:39.044 11:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:41.574 11:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:41.574 11:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:41.574 11:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:41.574 11:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:41.574 11:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk 
-l -o NAME 00:15:41.574 11:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:41.574 11:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:41.574 11:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.574 11:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:41.574 11:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.574 11:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.574 11:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.574 11:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:41.574 11:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:41.574 11:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:42.140 nvme0n1 00:15:42.140 11:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:42.140 11:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.140 11:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.140 11:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.140 11:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:42.140 11:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:42.398 11:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:42.398 11:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:42.398 11:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.657 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.657 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:42.657 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.657 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.657 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.657 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:42.657 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:42.915 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:42.915 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:42.915 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.173 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.173 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:43.173 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.173 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.173 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.173 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:43.173 11:38:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:43.173 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:43.173 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:43.173 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.173 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:43.173 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.173 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:43.174 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:43.432 request: 00:15:43.432 { 00:15:43.432 "name": "nvme0", 00:15:43.432 "dhchap_key": "key1", 00:15:43.432 "dhchap_ctrlr_key": "key3", 00:15:43.432 "method": "bdev_nvme_set_keys", 00:15:43.432 "req_id": 1 00:15:43.432 } 00:15:43.432 Got JSON-RPC error response 00:15:43.432 response: 00:15:43.432 { 00:15:43.432 "code": -13, 00:15:43.432 "message": "Permission denied" 00:15:43.432 } 00:15:43.432 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:43.432 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:43.432 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:43.432 11:38:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:43.432 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:43.432 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:43.432 11:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.691 11:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:43.691 11:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:45.065 11:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:45.065 11:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:45.065 11:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.065 11:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:45.065 11:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:45.065 11:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.065 11:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.065 11:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.066 11:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # 
bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:45.066 11:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:45.066 11:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:45.633 nvme0n1 00:15:45.633 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:45.633 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.633 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.633 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.633 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:45.633 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:45.633 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:45.633 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:45.633 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.633 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:45.633 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.633 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:45.633 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:46.197 request: 00:15:46.197 { 00:15:46.197 "name": "nvme0", 00:15:46.197 "dhchap_key": "key2", 00:15:46.197 "dhchap_ctrlr_key": "key0", 00:15:46.197 "method": "bdev_nvme_set_keys", 00:15:46.197 "req_id": 1 00:15:46.197 } 00:15:46.197 Got JSON-RPC error response 00:15:46.197 response: 00:15:46.197 { 00:15:46.197 "code": -13, 00:15:46.197 "message": "Permission denied" 00:15:46.197 } 00:15:46.197 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:46.197 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:46.197 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:46.197 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:46.197 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:46.197 11:38:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:46.197 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.456 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:46.456 11:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:47.392 11:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:47.392 11:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:47.392 11:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.651 11:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:47.651 11:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:47.651 11:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:47.651 11:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1613817 00:15:47.651 11:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1613817 ']' 00:15:47.651 11:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1613817 00:15:47.651 11:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:47.651 11:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.651 11:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1613817 
00:15:47.651 11:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:47.651 11:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:47.651 11:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1613817' 00:15:47.651 killing process with pid 1613817 00:15:47.651 11:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1613817 00:15:47.651 11:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1613817 00:15:47.909 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:47.909 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:15:47.909 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@99 -- # sync 00:15:47.909 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:15:47.909 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:15:47.909 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # set +e 00:15:47.909 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:15:47.909 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:15:47.909 rmmod nvme_rdma 00:15:47.909 rmmod nvme_fabrics 00:15:47.909 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:15:48.168 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # set -e 00:15:48.168 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # return 0 00:15:48.168 11:38:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # '[' -n 1633220 ']' 00:15:48.168 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@337 -- # killprocess 1633220 00:15:48.168 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1633220 ']' 00:15:48.168 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1633220 00:15:48.168 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:48.168 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:48.168 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1633220 00:15:48.168 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:48.168 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:48.168 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1633220' 00:15:48.168 killing process with pid 1633220 00:15:48.168 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1633220 00:15:48.168 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1633220 00:15:48.427 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:15:48.427 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # nvmf_fini 00:15:48.427 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@264 -- # local dev 00:15:48.427 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:15:48.427 11:38:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:48.427 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:48.427 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:48.427 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:15:48.427 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:48.427 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@130 -- # return 0 00:15:48.427 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # _dev=0 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # dev_map=() 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@284 -- # iptr 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@542 -- # iptables-save 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@542 -- # iptables-restore 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.H6z /tmp/spdk.key-sha256.YxO /tmp/spdk.key-sha384.KXF /tmp/spdk.key-sha512.p22 /tmp/spdk.key-sha512.oTx /tmp/spdk.key-sha384.wZh /tmp/spdk.key-sha256.jnX '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:15:48.428 00:15:48.428 real 2m45.611s 00:15:48.428 user 6m21.789s 00:15:48.428 sys 0m24.736s 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.428 ************************************ 00:15:48.428 END TEST nvmf_auth_target 00:15:48.428 ************************************ 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:48.428 ************************************ 00:15:48.428 START TEST nvmf_srq_overwhelm 00:15:48.428 ************************************ 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:15:48.428 * Looking for test storage... 
00:15:48.428 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # lcov --version 00:15:48.428 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:15:48.687 11:38:51 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.687 11:38:51 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:48.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.687 --rc genhtml_branch_coverage=1 00:15:48.687 --rc genhtml_function_coverage=1 00:15:48.687 --rc genhtml_legend=1 00:15:48.687 --rc geninfo_all_blocks=1 00:15:48.687 --rc geninfo_unexecuted_blocks=1 00:15:48.687 00:15:48.687 ' 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:48.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.687 --rc genhtml_branch_coverage=1 00:15:48.687 --rc genhtml_function_coverage=1 00:15:48.687 --rc genhtml_legend=1 00:15:48.687 --rc geninfo_all_blocks=1 00:15:48.687 --rc geninfo_unexecuted_blocks=1 00:15:48.687 00:15:48.687 ' 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:48.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.687 --rc genhtml_branch_coverage=1 00:15:48.687 --rc genhtml_function_coverage=1 00:15:48.687 --rc genhtml_legend=1 00:15:48.687 --rc geninfo_all_blocks=1 00:15:48.687 --rc geninfo_unexecuted_blocks=1 00:15:48.687 00:15:48.687 ' 00:15:48.687 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:48.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.687 --rc genhtml_branch_coverage=1 00:15:48.687 --rc genhtml_function_coverage=1 00:15:48.687 --rc genhtml_legend=1 00:15:48.687 --rc geninfo_all_blocks=1 00:15:48.688 --rc geninfo_unexecuted_blocks=1 00:15:48.688 00:15:48.688 ' 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:15:48.688 
11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:48.688 11:38:51 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:48.688 11:38:51 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@50 -- # : 0 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:48.688 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@260 -- # remove_target_ns 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # xtrace_disable 00:15:48.688 11:38:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:55.257 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.257 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@131 -- # pci_devs=() 00:15:55.257 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@131 -- # local -a pci_devs 00:15:55.257 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@132 -- # pci_net_devs=() 00:15:55.257 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:15:55.257 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@133 -- # pci_drivers=() 00:15:55.257 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@133 -- # local -A pci_drivers 00:15:55.257 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@135 -- # net_devs=() 00:15:55.257 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@135 -- # local -ga net_devs 00:15:55.257 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@136 -- # e810=() 00:15:55.257 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@136 -- # local -ga e810 00:15:55.257 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@137 -- # x722=() 00:15:55.257 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@137 -- # local -ga x722 00:15:55.257 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@138 -- # mlx=() 00:15:55.257 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@138 -- # local -ga mlx 00:15:55.257 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@154 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:55.258 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:15:55.258 
11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:55.258 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:55.258 Found net devices under 0000:18:00.0: mlx_0_0 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:55.258 Found net devices under 0000:18:00.1: mlx_0_1 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:15:55.258 11:38:58 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@253 -- # get_rdma_if_list 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # rdma_devs=() 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@89 -- # continue 2 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@89 -- # continue 2 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@262 -- # is_hw=yes 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@61 -- # uname 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_cm 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_core 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_umad 00:15:55.258 11:38:58 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe iw_cm 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@28 -- # local -g _dev 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@44 -- # ips=() 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:55.258 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:15:55.259 11:38:58 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@58 -- # key_initiator=target1 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@11 -- # local val=167772161 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:15:55.259 10.0.0.1 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@11 -- # local val=167772162 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:15:55.259 11:38:58 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:15:55.259 10.0.0.2 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 
00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@38 -- # ping_ips 1 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@168 -- # get_net_dev target0 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@107 -- # local dev=target0 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:15:55.259 11:38:58 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:15:55.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:55.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:15:55.259 00:15:55.259 --- 10.0.0.2 ping statistics --- 00:15:55.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.259 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@168 -- # get_net_dev target0 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@107 -- # local dev=target0 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:15:55.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.012 ms 00:15:55.259 00:15:55.259 --- 10.0.0.2 ping statistics --- 00:15:55.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.259 rtt min/avg/max/mdev = 0.012/0.012/0.012/0.000 ms 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@98 -- # (( pair++ )) 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:15:55.259 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@270 -- # return 0 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:15:55.260 
11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@168 -- # get_net_dev target0 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@107 -- # local dev=target0 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:15:55.260 11:38:58 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@168 -- # get_net_dev target1 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@107 -- # local dev=target1 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:15:55.260 11:38:58 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@168 -- # get_net_dev target0 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@107 -- # local dev=target0 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:15:55.260 11:38:58 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@168 -- # get_net_dev target1 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@107 -- # local dev=target1 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:15:55.260 11:38:58 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # nvmfpid=1638953 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@329 -- # waitforlisten 1638953 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # '[' -z 1638953 ']' 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.260 11:38:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:55.519 [2024-11-20 11:38:58.769699] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:15:55.519 [2024-11-20 11:38:58.769760] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.519 [2024-11-20 11:38:58.849217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:55.519 [2024-11-20 11:38:58.899430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.519 [2024-11-20 11:38:58.899472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.519 [2024-11-20 11:38:58.899482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.519 [2024-11-20 11:38:58.899491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.519 [2024-11-20 11:38:58.899498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:55.519 [2024-11-20 11:38:58.900965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.519 [2024-11-20 11:38:58.901060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.519 [2024-11-20 11:38:58.901139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:55.519 [2024-11-20 11:38:58.901142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@868 -- # return 0 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:55.778 [2024-11-20 11:38:59.081929] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23b4220/0x23b8710) succeed. 00:15:55.778 [2024-11-20 11:38:59.090997] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23b58b0/0x23f9db0) succeed. 
00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:55.778 Malloc0 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:55.778 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.779 11:38:59 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 10.0.0.2 -s 4420 00:15:55.779 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.779 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:55.779 [2024-11-20 11:38:59.200108] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:15:55.779 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.779 11:38:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 10.0.0.2 -s 4420 00:15:56.785 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:15:56.785 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:15:56.785 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:56.785 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:56.785 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:56.785 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:56.785 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:15:56.785 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:56.785 11:39:00 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:56.785 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.785 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:56.785 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.785 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:56.785 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.785 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:56.785 Malloc1 00:15:56.785 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.785 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:56.786 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.786 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:56.786 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.786 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:15:56.786 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.786 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 
00:15:56.786 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.786 11:39:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme1n1 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme1n1 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.163 11:39:01 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:58.163 Malloc2 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 10.0.0.2 -s 4420 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:58.163 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.164 11:39:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk 
nvme2n1 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme2n1 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:59.099 Malloc3 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.099 11:39:02 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 10.0.0.2 -s 4420 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.099 11:39:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:00.034 11:39:03 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme3n1 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:00.034 Malloc4 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.034 11:39:03 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 10.0.0.2 -s 4420 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.034 11:39:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:16:00.968 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:16:00.968 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:16:00.968 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:00.968 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1 00:16:00.968 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:00.968 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme4n1 00:16:00.968 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:16:00.968 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:00.968 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s 
SPDK00000000000005
00:16:00.968 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.968 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:16:01.227 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.227 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5
00:16:01.227 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.227 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:16:01.227 Malloc5
00:16:01.227 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.227 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5
00:16:01.227 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.227 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:16:01.227 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.227 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 10.0.0.2 -s 4420
00:16:01.227 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.227 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:16:01.227 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.227 11:39:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420
00:16:02.163 11:39:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1
00:16:02.163 11:39:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0
00:16:02.163 11:39:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:16:02.163 11:39:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1
00:16:02.163 11:39:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:16:02.163 11:39:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme5n1
00:16:02.163 11:39:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0
00:16:02.163 11:39:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13
00:16:02.163 [global]
00:16:02.163 thread=1
00:16:02.163 invalidate=1
00:16:02.163 rw=read
00:16:02.163 time_based=1
00:16:02.163 runtime=10
00:16:02.163 ioengine=libaio
00:16:02.163 direct=1
00:16:02.163 bs=1048576
00:16:02.163 iodepth=128
00:16:02.163 norandommap=1
00:16:02.163 numjobs=13
00:16:02.163
00:16:02.163 [job0]
00:16:02.163 filename=/dev/nvme0n1
00:16:02.163 [job1]
00:16:02.163 filename=/dev/nvme1n1
00:16:02.163 [job2]
00:16:02.163 filename=/dev/nvme2n1
00:16:02.163 [job3]
00:16:02.163 filename=/dev/nvme3n1
00:16:02.163 [job4]
00:16:02.163 filename=/dev/nvme4n1
00:16:02.163 [job5]
00:16:02.163 filename=/dev/nvme5n1
00:16:02.420 Could not set queue depth (nvme0n1)
00:16:02.420 Could not set queue depth (nvme1n1)
00:16:02.420 Could not set queue depth (nvme2n1)
00:16:02.420 Could not set queue depth (nvme3n1)
00:16:02.420 Could not set queue depth (nvme4n1)
00:16:02.420 Could not set queue depth (nvme5n1)
00:16:02.678 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:16:02.678 ...
00:16:02.678 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:16:02.678 ...
00:16:02.678 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:16:02.678 ...
00:16:02.678 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:16:02.678 ...
00:16:02.678 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:16:02.678 ...
00:16:02.678 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:16:02.678 ...
00:16:02.678 fio-3.35 00:16:02.678 Starting 78 threads 00:16:17.570 00:16:17.570 job0: (groupid=0, jobs=1): err= 0: pid=1640253: Wed Nov 20 11:39:18 2024 00:16:17.570 read: IOPS=41, BW=41.9MiB/s (43.9MB/s)(527MiB/12576msec) 00:16:17.571 slat (usec): min=57, max=2092.2k, avg=19835.48, stdev=178363.71 00:16:17.571 clat (msec): min=250, max=12454, avg=2912.75, stdev=4183.57 00:16:17.571 lat (msec): min=252, max=12465, avg=2932.59, stdev=4196.90 00:16:17.571 clat percentiles (msec): 00:16:17.571 | 1.00th=[ 251], 5.00th=[ 253], 10.00th=[ 255], 20.00th=[ 259], 00:16:17.571 | 30.00th=[ 284], 40.00th=[ 384], 50.00th=[ 489], 60.00th=[ 609], 00:16:17.571 | 70.00th=[ 885], 80.00th=[ 8490], 90.00th=[10805], 95.00th=[10939], 00:16:17.571 | 99.00th=[10939], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:16:17.571 | 99.99th=[12416] 00:16:17.571 bw ( KiB/s): min= 1477, max=360448, per=3.40%, avg=102328.62, stdev=150881.14, samples=8 00:16:17.571 iops : min= 1, max= 352, avg=99.88, stdev=147.39, samples=8 00:16:17.571 lat (msec) : 500=51.04%, 750=16.32%, 1000=3.04%, >=2000=29.60% 00:16:17.571 cpu : usr=0.02%, sys=0.89%, ctx=732, majf=0, minf=32769 00:16:17.571 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.0% 00:16:17.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.571 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:17.571 issued rwts: total=527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.571 job0: (groupid=0, jobs=1): err= 0: pid=1640254: Wed Nov 20 11:39:18 2024 00:16:17.571 read: IOPS=8, BW=8867KiB/s (9080kB/s)(109MiB/12588msec) 00:16:17.571 slat (usec): min=587, max=2108.9k, avg=96144.01, stdev=373354.99 00:16:17.571 clat (msec): min=2107, max=12581, avg=5688.77, stdev=3650.30 00:16:17.571 lat (msec): min=2741, max=12587, avg=5784.91, stdev=3692.87 00:16:17.571 clat percentiles (msec): 00:16:17.571 | 1.00th=[ 
2735], 5.00th=[ 2769], 10.00th=[ 2836], 20.00th=[ 3104], 00:16:17.571 | 30.00th=[ 3339], 40.00th=[ 3473], 50.00th=[ 3742], 60.00th=[ 4010], 00:16:17.571 | 70.00th=[ 6409], 80.00th=[10671], 90.00th=[12550], 95.00th=[12550], 00:16:17.571 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550], 00:16:17.571 | 99.99th=[12550] 00:16:17.571 lat (msec) : >=2000=100.00% 00:16:17.571 cpu : usr=0.02%, sys=0.71%, ctx=258, majf=0, minf=27905 00:16:17.571 IO depths : 1=0.9%, 2=1.8%, 4=3.7%, 8=7.3%, 16=14.7%, 32=29.4%, >=64=42.2% 00:16:17.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.571 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:17.571 issued rwts: total=109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.571 job0: (groupid=0, jobs=1): err= 0: pid=1640255: Wed Nov 20 11:39:18 2024 00:16:17.571 read: IOPS=12, BW=12.9MiB/s (13.6MB/s)(163MiB/12610msec) 00:16:17.571 slat (usec): min=648, max=2091.3k, avg=64384.83, stdev=320872.61 00:16:17.571 clat (msec): min=1183, max=12490, avg=9431.39, stdev=3624.05 00:16:17.571 lat (msec): min=1186, max=12499, avg=9495.77, stdev=3584.40 00:16:17.571 clat percentiles (msec): 00:16:17.571 | 1.00th=[ 1200], 5.00th=[ 1955], 10.00th=[ 2089], 20.00th=[ 6141], 00:16:17.571 | 30.00th=[ 6409], 40.00th=[10671], 50.00th=[11476], 60.00th=[11745], 00:16:17.571 | 70.00th=[11879], 80.00th=[12147], 90.00th=[12281], 95.00th=[12416], 00:16:17.571 | 99.00th=[12416], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550], 00:16:17.571 | 99.99th=[12550] 00:16:17.571 bw ( KiB/s): min= 1404, max=26624, per=0.35%, avg=10435.29, stdev=10012.42, samples=7 00:16:17.571 iops : min= 1, max= 26, avg=10.00, stdev= 9.71, samples=7 00:16:17.571 lat (msec) : 2000=6.75%, >=2000=93.25% 00:16:17.571 cpu : usr=0.00%, sys=0.91%, ctx=276, majf=0, minf=32769 00:16:17.571 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=4.9%, 16=9.8%, 32=19.6%, 
>=64=61.3% 00:16:17.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.571 complete : 0=0.0%, 4=97.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.7% 00:16:17.571 issued rwts: total=163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.571 job0: (groupid=0, jobs=1): err= 0: pid=1640256: Wed Nov 20 11:39:18 2024 00:16:17.571 read: IOPS=18, BW=18.1MiB/s (19.0MB/s)(189MiB/10436msec) 00:16:17.571 slat (usec): min=66, max=2103.6k, avg=52913.80, stdev=286936.15 00:16:17.571 clat (msec): min=433, max=9396, avg=2118.43, stdev=2454.50 00:16:17.571 lat (msec): min=437, max=9473, avg=2171.34, stdev=2517.60 00:16:17.571 clat percentiles (msec): 00:16:17.571 | 1.00th=[ 439], 5.00th=[ 506], 10.00th=[ 592], 20.00th=[ 693], 00:16:17.571 | 30.00th=[ 936], 40.00th=[ 1183], 50.00th=[ 1435], 60.00th=[ 1485], 00:16:17.571 | 70.00th=[ 1569], 80.00th=[ 1720], 90.00th=[ 7819], 95.00th=[ 9329], 00:16:17.571 | 99.00th=[ 9329], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:16:17.571 | 99.99th=[ 9463] 00:16:17.571 bw ( KiB/s): min=16384, max=110592, per=2.11%, avg=63488.00, stdev=66615.12, samples=2 00:16:17.571 iops : min= 16, max= 108, avg=62.00, stdev=65.05, samples=2 00:16:17.571 lat (msec) : 500=4.23%, 750=17.46%, 1000=9.52%, 2000=53.97%, >=2000=14.81% 00:16:17.571 cpu : usr=0.00%, sys=0.90%, ctx=248, majf=0, minf=32769 00:16:17.571 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.2%, 16=8.5%, 32=16.9%, >=64=66.7% 00:16:17.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.571 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6% 00:16:17.571 issued rwts: total=189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.571 job0: (groupid=0, jobs=1): err= 0: pid=1640257: Wed Nov 20 11:39:18 2024 00:16:17.571 read: IOPS=17, BW=17.8MiB/s (18.7MB/s)(224MiB/12553msec) 00:16:17.571 
slat (usec): min=448, max=2104.7k, avg=46581.67, stdev=277002.13 00:16:17.571 clat (msec): min=953, max=11573, avg=6755.93, stdev=4795.33 00:16:17.571 lat (msec): min=962, max=11576, avg=6802.51, stdev=4791.22 00:16:17.571 clat percentiles (msec): 00:16:17.571 | 1.00th=[ 969], 5.00th=[ 986], 10.00th=[ 1028], 20.00th=[ 1083], 00:16:17.571 | 30.00th=[ 1116], 40.00th=[ 3138], 50.00th=[10671], 60.00th=[10939], 00:16:17.571 | 70.00th=[11073], 80.00th=[11342], 90.00th=[11476], 95.00th=[11476], 00:16:17.571 | 99.00th=[11610], 99.50th=[11610], 99.90th=[11610], 99.95th=[11610], 00:16:17.571 | 99.99th=[11610] 00:16:17.571 bw ( KiB/s): min= 1521, max=92160, per=1.10%, avg=33021.50, stdev=42118.09, samples=6 00:16:17.571 iops : min= 1, max= 90, avg=32.17, stdev=41.20, samples=6 00:16:17.571 lat (msec) : 1000=6.25%, 2000=31.70%, >=2000=62.05% 00:16:17.571 cpu : usr=0.01%, sys=0.71%, ctx=398, majf=0, minf=32769 00:16:17.571 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.1%, 32=14.3%, >=64=71.9% 00:16:17.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.571 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:16:17.571 issued rwts: total=224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.571 job0: (groupid=0, jobs=1): err= 0: pid=1640258: Wed Nov 20 11:39:18 2024 00:16:17.571 read: IOPS=4, BW=4796KiB/s (4911kB/s)(49.0MiB/10463msec) 00:16:17.571 slat (usec): min=499, max=2174.3k, avg=212271.82, stdev=602128.73 00:16:17.571 clat (msec): min=61, max=10416, avg=7405.38, stdev=2053.56 00:16:17.571 lat (msec): min=2107, max=10462, avg=7617.65, stdev=1800.61 00:16:17.571 clat percentiles (msec): 00:16:17.571 | 1.00th=[ 62], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 6544], 00:16:17.571 | 30.00th=[ 6544], 40.00th=[ 6544], 50.00th=[ 8423], 60.00th=[ 8557], 00:16:17.571 | 70.00th=[ 8557], 80.00th=[ 8557], 90.00th=[10268], 95.00th=[10402], 00:16:17.571 | 
99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:16:17.571 | 99.99th=[10402] 00:16:17.571 lat (msec) : 100=2.04%, >=2000=97.96% 00:16:17.571 cpu : usr=0.00%, sys=0.30%, ctx=54, majf=0, minf=12545 00:16:17.571 IO depths : 1=2.0%, 2=4.1%, 4=8.2%, 8=16.3%, 16=32.7%, 32=36.7%, >=64=0.0% 00:16:17.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.571 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:17.571 issued rwts: total=49,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.571 job0: (groupid=0, jobs=1): err= 0: pid=1640259: Wed Nov 20 11:39:18 2024 00:16:17.571 read: IOPS=24, BW=24.6MiB/s (25.8MB/s)(309MiB/12559msec) 00:16:17.571 slat (usec): min=62, max=2179.6k, avg=33786.67, stdev=232396.42 00:16:17.571 clat (msec): min=536, max=11403, avg=4923.60, stdev=4423.25 00:16:17.571 lat (msec): min=541, max=11407, avg=4957.38, stdev=4432.65 00:16:17.571 clat percentiles (msec): 00:16:17.571 | 1.00th=[ 542], 5.00th=[ 550], 10.00th=[ 575], 20.00th=[ 609], 00:16:17.571 | 30.00th=[ 735], 40.00th=[ 827], 50.00th=[ 5269], 60.00th=[ 6074], 00:16:17.571 | 70.00th=[ 6409], 80.00th=[10939], 90.00th=[11208], 95.00th=[11342], 00:16:17.571 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:16:17.571 | 99.99th=[11342] 00:16:17.571 bw ( KiB/s): min= 1503, max=174080, per=1.77%, avg=53170.14, stdev=62261.48, samples=7 00:16:17.571 iops : min= 1, max= 170, avg=51.86, stdev=60.87, samples=7 00:16:17.571 lat (msec) : 750=32.04%, 1000=13.92%, >=2000=54.05% 00:16:17.571 cpu : usr=0.00%, sys=0.74%, ctx=586, majf=0, minf=32769 00:16:17.571 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.4%, >=64=79.6% 00:16:17.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.571 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:16:17.571 issued rwts: total=309,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:16:17.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.571 job0: (groupid=0, jobs=1): err= 0: pid=1640260: Wed Nov 20 11:39:18 2024 00:16:17.571 read: IOPS=3, BW=3587KiB/s (3673kB/s)(44.0MiB/12562msec) 00:16:17.571 slat (usec): min=922, max=2083.3k, avg=237289.13, stdev=638194.16 00:16:17.571 clat (msec): min=2120, max=12558, avg=9517.65, stdev=3263.59 00:16:17.571 lat (msec): min=4204, max=12561, avg=9754.94, stdev=3088.12 00:16:17.571 clat percentiles (msec): 00:16:17.571 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:16:17.571 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12416], 00:16:17.571 | 70.00th=[12550], 80.00th=[12550], 90.00th=[12550], 95.00th=[12550], 00:16:17.571 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550], 00:16:17.571 | 99.99th=[12550] 00:16:17.571 lat (msec) : >=2000=100.00% 00:16:17.571 cpu : usr=0.00%, sys=0.33%, ctx=69, majf=0, minf=11265 00:16:17.571 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:16:17.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.572 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:17.572 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.572 job0: (groupid=0, jobs=1): err= 0: pid=1640261: Wed Nov 20 11:39:18 2024 00:16:17.572 read: IOPS=32, BW=32.2MiB/s (33.7MB/s)(326MiB/10132msec) 00:16:17.572 slat (usec): min=54, max=2023.0k, avg=30791.43, stdev=205754.61 00:16:17.572 clat (msec): min=91, max=8668, avg=2688.80, stdev=3167.92 00:16:17.572 lat (msec): min=174, max=8677, avg=2719.59, stdev=3183.06 00:16:17.572 clat percentiles (msec): 00:16:17.572 | 1.00th=[ 178], 5.00th=[ 279], 10.00th=[ 384], 20.00th=[ 642], 00:16:17.572 | 30.00th=[ 768], 40.00th=[ 894], 50.00th=[ 995], 60.00th=[ 1003], 00:16:17.572 | 70.00th=[ 2869], 
80.00th=[ 7080], 90.00th=[ 8557], 95.00th=[ 8658], 00:16:17.572 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:16:17.572 | 99.99th=[ 8658] 00:16:17.572 bw ( KiB/s): min=12288, max=153600, per=3.37%, avg=101376.00, stdev=61769.07, samples=4 00:16:17.572 iops : min= 12, max= 150, avg=99.00, stdev=60.32, samples=4 00:16:17.572 lat (msec) : 100=0.31%, 250=4.60%, 500=9.82%, 750=11.35%, 1000=24.54% 00:16:17.572 lat (msec) : 2000=19.33%, >=2000=30.06% 00:16:17.572 cpu : usr=0.01%, sys=1.17%, ctx=282, majf=0, minf=32769 00:16:17.572 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=4.9%, 32=9.8%, >=64=80.7% 00:16:17.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.572 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:16:17.572 issued rwts: total=326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.572 job0: (groupid=0, jobs=1): err= 0: pid=1640262: Wed Nov 20 11:39:18 2024 00:16:17.572 read: IOPS=18, BW=18.1MiB/s (19.0MB/s)(190MiB/10470msec) 00:16:17.572 slat (usec): min=43, max=2123.1k, avg=54695.36, stdev=297782.80 00:16:17.572 clat (msec): min=76, max=9860, avg=6601.70, stdev=3460.26 00:16:17.572 lat (msec): min=1082, max=9864, avg=6656.40, stdev=3431.52 00:16:17.572 clat percentiles (msec): 00:16:17.572 | 1.00th=[ 1083], 5.00th=[ 1133], 10.00th=[ 1167], 20.00th=[ 1569], 00:16:17.572 | 30.00th=[ 3507], 40.00th=[ 6477], 50.00th=[ 8658], 60.00th=[ 9194], 00:16:17.572 | 70.00th=[ 9329], 80.00th=[ 9597], 90.00th=[ 9731], 95.00th=[ 9731], 00:16:17.572 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:16:17.572 | 99.99th=[ 9866] 00:16:17.572 bw ( KiB/s): min= 2048, max=51200, per=0.70%, avg=21162.67, stdev=18844.57, samples=6 00:16:17.572 iops : min= 2, max= 50, avg=20.67, stdev=18.40, samples=6 00:16:17.572 lat (msec) : 100=0.53%, 2000=22.63%, >=2000=76.84% 00:16:17.572 cpu : usr=0.00%, sys=1.00%, 
ctx=257, majf=0, minf=32769 00:16:17.572 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.2%, 16=8.4%, 32=16.8%, >=64=66.8% 00:16:17.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.572 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6% 00:16:17.572 issued rwts: total=190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.572 job0: (groupid=0, jobs=1): err= 0: pid=1640263: Wed Nov 20 11:39:18 2024 00:16:17.572 read: IOPS=109, BW=110MiB/s (115MB/s)(1148MiB/10467msec) 00:16:17.572 slat (usec): min=48, max=2126.7k, avg=9060.49, stdev=105741.85 00:16:17.572 clat (msec): min=61, max=6751, avg=1081.16, stdev=1954.19 00:16:17.572 lat (msec): min=126, max=6753, avg=1090.22, stdev=1960.46 00:16:17.572 clat percentiles (msec): 00:16:17.572 | 1.00th=[ 126], 5.00th=[ 127], 10.00th=[ 128], 20.00th=[ 148], 00:16:17.572 | 30.00th=[ 215], 40.00th=[ 251], 50.00th=[ 271], 60.00th=[ 363], 00:16:17.572 | 70.00th=[ 693], 80.00th=[ 978], 90.00th=[ 6409], 95.00th=[ 6611], 00:16:17.572 | 99.00th=[ 6745], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745], 00:16:17.572 | 99.99th=[ 6745] 00:16:17.572 bw ( KiB/s): min= 2048, max=692224, per=7.71%, avg=232106.67, stdev=255234.95, samples=9 00:16:17.572 iops : min= 2, max= 676, avg=226.67, stdev=249.25, samples=9 00:16:17.572 lat (msec) : 100=0.09%, 250=39.98%, 500=26.74%, 750=3.66%, 1000=10.10% 00:16:17.572 lat (msec) : 2000=7.93%, >=2000=11.50% 00:16:17.572 cpu : usr=0.05%, sys=1.48%, ctx=1248, majf=0, minf=32769 00:16:17.572 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5% 00:16:17.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.572 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.572 issued rwts: total=1148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.572 job0: 
(groupid=0, jobs=1): err= 0: pid=1640264: Wed Nov 20 11:39:18 2024 00:16:17.572 read: IOPS=102, BW=102MiB/s (107MB/s)(1027MiB/10061msec) 00:16:17.572 slat (usec): min=37, max=2054.1k, avg=9738.64, stdev=93622.97 00:16:17.572 clat (msec): min=53, max=6118, avg=909.52, stdev=1125.93 00:16:17.572 lat (msec): min=67, max=6125, avg=919.26, stdev=1138.62 00:16:17.572 clat percentiles (msec): 00:16:17.572 | 1.00th=[ 79], 5.00th=[ 176], 10.00th=[ 264], 20.00th=[ 288], 00:16:17.572 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 443], 00:16:17.572 | 70.00th=[ 827], 80.00th=[ 1183], 90.00th=[ 2165], 95.00th=[ 4329], 00:16:17.572 | 99.00th=[ 4866], 99.50th=[ 6074], 99.90th=[ 6074], 99.95th=[ 6141], 00:16:17.572 | 99.99th=[ 6141] 00:16:17.572 bw ( KiB/s): min=61440, max=425984, per=6.80%, avg=204800.00, stdev=148501.18, samples=9 00:16:17.572 iops : min= 60, max= 416, avg=200.00, stdev=145.02, samples=9 00:16:17.572 lat (msec) : 100=1.66%, 250=7.01%, 500=51.61%, 750=1.56%, 1000=13.44% 00:16:17.572 lat (msec) : 2000=7.89%, >=2000=16.85% 00:16:17.572 cpu : usr=0.03%, sys=1.94%, ctx=935, majf=0, minf=32769 00:16:17.572 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:16:17.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.572 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.572 issued rwts: total=1027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.572 job0: (groupid=0, jobs=1): err= 0: pid=1640265: Wed Nov 20 11:39:18 2024 00:16:17.572 read: IOPS=2, BW=2521KiB/s (2582kB/s)(31.0MiB/12590msec) 00:16:17.572 slat (usec): min=1480, max=2114.9k, avg=338073.96, stdev=749334.21 00:16:17.572 clat (msec): min=2108, max=12584, avg=10078.29, stdev=3252.81 00:16:17.572 lat (msec): min=4203, max=12588, avg=10416.37, stdev=2924.99 00:16:17.572 clat percentiles (msec): 00:16:17.572 | 1.00th=[ 2106], 5.00th=[ 4212], 
10.00th=[ 4245], 20.00th=[ 6409], 00:16:17.572 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12416], 60.00th=[12416], 00:16:17.572 | 70.00th=[12416], 80.00th=[12550], 90.00th=[12550], 95.00th=[12550], 00:16:17.572 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550], 00:16:17.572 | 99.99th=[12550] 00:16:17.572 lat (msec) : >=2000=100.00% 00:16:17.572 cpu : usr=0.00%, sys=0.24%, ctx=78, majf=0, minf=7937 00:16:17.572 IO depths : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0% 00:16:17.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.572 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:17.572 issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.572 job1: (groupid=0, jobs=1): err= 0: pid=1640284: Wed Nov 20 11:39:18 2024 00:16:17.572 read: IOPS=76, BW=76.6MiB/s (80.3MB/s)(772MiB/10080msec) 00:16:17.572 slat (usec): min=49, max=2002.8k, avg=12951.82, stdev=107366.55 00:16:17.572 clat (msec): min=78, max=4878, avg=1601.84, stdev=1574.50 00:16:17.572 lat (msec): min=80, max=4884, avg=1614.79, stdev=1578.96 00:16:17.572 clat percentiles (msec): 00:16:17.572 | 1.00th=[ 176], 5.00th=[ 266], 10.00th=[ 268], 20.00th=[ 275], 00:16:17.572 | 30.00th=[ 317], 40.00th=[ 575], 50.00th=[ 760], 60.00th=[ 1028], 00:16:17.572 | 70.00th=[ 2735], 80.00th=[ 3272], 90.00th=[ 4111], 95.00th=[ 4732], 00:16:17.572 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:16:17.572 | 99.99th=[ 4866] 00:16:17.572 bw ( KiB/s): min=10240, max=458752, per=3.65%, avg=110033.42, stdev=127049.71, samples=12 00:16:17.572 iops : min= 10, max= 448, avg=107.42, stdev=124.09, samples=12 00:16:17.572 lat (msec) : 100=0.39%, 250=0.91%, 500=37.44%, 750=10.23%, 1000=10.10% 00:16:17.572 lat (msec) : 2000=8.03%, >=2000=32.90% 00:16:17.572 cpu : usr=0.02%, sys=1.26%, ctx=934, majf=0, minf=32769 00:16:17.572 IO depths 
: 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.8% 00:16:17.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.572 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:17.572 issued rwts: total=772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.572 job1: (groupid=0, jobs=1): err= 0: pid=1640285: Wed Nov 20 11:39:18 2024 00:16:17.572 read: IOPS=5, BW=5683KiB/s (5820kB/s)(70.0MiB/12612msec) 00:16:17.572 slat (usec): min=702, max=2089.4k, avg=149721.42, stdev=516432.17 00:16:17.572 clat (msec): min=2130, max=12609, avg=11394.23, stdev=2520.98 00:16:17.572 lat (msec): min=4219, max=12610, avg=11543.95, stdev=2260.59 00:16:17.572 clat percentiles (msec): 00:16:17.572 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[10671], 00:16:17.572 | 30.00th=[12416], 40.00th=[12416], 50.00th=[12550], 60.00th=[12550], 00:16:17.572 | 70.00th=[12550], 80.00th=[12550], 90.00th=[12550], 95.00th=[12550], 00:16:17.572 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550], 00:16:17.572 | 99.99th=[12550] 00:16:17.572 lat (msec) : >=2000=100.00% 00:16:17.572 cpu : usr=0.00%, sys=0.59%, ctx=96, majf=0, minf=17921 00:16:17.572 IO depths : 1=1.4%, 2=2.9%, 4=5.7%, 8=11.4%, 16=22.9%, 32=45.7%, >=64=10.0% 00:16:17.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.572 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:17.572 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.572 job1: (groupid=0, jobs=1): err= 0: pid=1640286: Wed Nov 20 11:39:18 2024 00:16:17.572 read: IOPS=105, BW=105MiB/s (110MB/s)(1061MiB/10089msec) 00:16:17.572 slat (usec): min=50, max=1601.9k, avg=9424.90, stdev=61362.82 00:16:17.572 clat (msec): min=84, max=3035, avg=1022.04, stdev=596.69 00:16:17.572 lat 
(msec): min=100, max=3038, avg=1031.47, stdev=598.46 00:16:17.572 clat percentiles (msec): 00:16:17.572 | 1.00th=[ 226], 5.00th=[ 259], 10.00th=[ 309], 20.00th=[ 359], 00:16:17.573 | 30.00th=[ 676], 40.00th=[ 810], 50.00th=[ 911], 60.00th=[ 1045], 00:16:17.573 | 70.00th=[ 1250], 80.00th=[ 1569], 90.00th=[ 1871], 95.00th=[ 2039], 00:16:17.573 | 99.00th=[ 2970], 99.50th=[ 3004], 99.90th=[ 3037], 99.95th=[ 3037], 00:16:17.573 | 99.99th=[ 3037] 00:16:17.573 bw ( KiB/s): min=34816, max=344064, per=4.54%, avg=136569.71, stdev=85796.18, samples=14 00:16:17.573 iops : min= 34, max= 336, avg=133.29, stdev=83.71, samples=14 00:16:17.573 lat (msec) : 100=0.09%, 250=4.34%, 500=17.81%, 750=10.08%, 1000=23.47% 00:16:17.573 lat (msec) : 2000=38.27%, >=2000=5.94% 00:16:17.573 cpu : usr=0.05%, sys=1.93%, ctx=1274, majf=0, minf=32769 00:16:17.573 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.1% 00:16:17.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.573 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.573 issued rwts: total=1061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.573 job1: (groupid=0, jobs=1): err= 0: pid=1640287: Wed Nov 20 11:39:18 2024 00:16:17.573 read: IOPS=2, BW=2456KiB/s (2515kB/s)(30.0MiB/12506msec) 00:16:17.573 slat (usec): min=642, max=2089.3k, avg=346184.65, stdev=753081.00 00:16:17.573 clat (msec): min=2119, max=12491, avg=9702.73, stdev=3150.35 00:16:17.573 lat (msec): min=4197, max=12505, avg=10048.92, stdev=2844.06 00:16:17.573 clat percentiles (msec): 00:16:17.573 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:16:17.573 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12281], 00:16:17.573 | 70.00th=[12281], 80.00th=[12416], 90.00th=[12416], 95.00th=[12550], 00:16:17.573 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550], 00:16:17.573 | 
99.99th=[12550] 00:16:17.573 lat (msec) : >=2000=100.00% 00:16:17.573 cpu : usr=0.00%, sys=0.18%, ctx=61, majf=0, minf=7681 00:16:17.573 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0% 00:16:17.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.573 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:17.573 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.573 job1: (groupid=0, jobs=1): err= 0: pid=1640288: Wed Nov 20 11:39:18 2024 00:16:17.573 read: IOPS=16, BW=16.4MiB/s (17.2MB/s)(207MiB/12619msec) 00:16:17.573 slat (usec): min=47, max=2081.8k, avg=50728.72, stdev=277475.95 00:16:17.573 clat (msec): min=815, max=8855, avg=4419.94, stdev=2455.73 00:16:17.573 lat (msec): min=832, max=8857, avg=4470.67, stdev=2457.87 00:16:17.573 clat percentiles (msec): 00:16:17.573 | 1.00th=[ 827], 5.00th=[ 844], 10.00th=[ 844], 20.00th=[ 3306], 00:16:17.573 | 30.00th=[ 3540], 40.00th=[ 3641], 50.00th=[ 3742], 60.00th=[ 3910], 00:16:17.573 | 70.00th=[ 4010], 80.00th=[ 7148], 90.00th=[ 8658], 95.00th=[ 8792], 00:16:17.573 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:16:17.573 | 99.99th=[ 8792] 00:16:17.573 bw ( KiB/s): min= 2048, max=94208, per=1.82%, avg=54658.33, stdev=47447.89, samples=3 00:16:17.573 iops : min= 2, max= 92, avg=53.33, stdev=46.32, samples=3 00:16:17.573 lat (msec) : 1000=13.53%, 2000=0.48%, >=2000=85.99% 00:16:17.573 cpu : usr=0.02%, sys=1.02%, ctx=202, majf=0, minf=32769 00:16:17.573 IO depths : 1=0.5%, 2=1.0%, 4=1.9%, 8=3.9%, 16=7.7%, 32=15.5%, >=64=69.6% 00:16:17.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.573 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:16:17.573 issued rwts: total=207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.573 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:16:17.573 job1: (groupid=0, jobs=1): err= 0: pid=1640289: Wed Nov 20 11:39:18 2024 00:16:17.573 read: IOPS=3, BW=3117KiB/s (3192kB/s)(38.0MiB/12484msec) 00:16:17.573 slat (usec): min=875, max=2109.3k, avg=272459.98, stdev=683394.38 00:16:17.573 clat (msec): min=2129, max=12479, avg=10118.54, stdev=3167.84 00:16:17.573 lat (msec): min=4200, max=12483, avg=10391.00, stdev=2895.74 00:16:17.573 clat percentiles (msec): 00:16:17.573 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6409], 00:16:17.573 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12281], 60.00th=[12416], 00:16:17.573 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:16:17.573 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:16:17.573 | 99.99th=[12416] 00:16:17.573 lat (msec) : >=2000=100.00% 00:16:17.573 cpu : usr=0.00%, sys=0.30%, ctx=49, majf=0, minf=9729 00:16:17.573 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:16:17.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.573 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:17.573 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.573 job1: (groupid=0, jobs=1): err= 0: pid=1640290: Wed Nov 20 11:39:18 2024 00:16:17.573 read: IOPS=22, BW=22.2MiB/s (23.3MB/s)(277MiB/12487msec) 00:16:17.573 slat (usec): min=640, max=2068.8k, avg=37420.78, stdev=208077.81 00:16:17.573 clat (msec): min=2119, max=10573, avg=5385.50, stdev=1571.80 00:16:17.573 lat (msec): min=2720, max=10689, avg=5422.92, stdev=1584.39 00:16:17.573 clat percentiles (msec): 00:16:17.573 | 1.00th=[ 2702], 5.00th=[ 2735], 10.00th=[ 3943], 20.00th=[ 4010], 00:16:17.573 | 30.00th=[ 4178], 40.00th=[ 4866], 50.00th=[ 5537], 60.00th=[ 5671], 00:16:17.573 | 70.00th=[ 5940], 80.00th=[ 6409], 90.00th=[ 7819], 95.00th=[ 8020], 00:16:17.573 | 99.00th=[ 
8154], 99.50th=[ 8557], 99.90th=[10537], 99.95th=[10537], 00:16:17.573 | 99.99th=[10537] 00:16:17.573 bw ( KiB/s): min= 1656, max=88064, per=1.46%, avg=43829.71, stdev=38146.04, samples=7 00:16:17.573 iops : min= 1, max= 86, avg=42.71, stdev=37.37, samples=7 00:16:17.573 lat (msec) : >=2000=100.00% 00:16:17.573 cpu : usr=0.00%, sys=1.07%, ctx=483, majf=0, minf=32769 00:16:17.573 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.9%, 16=5.8%, 32=11.6%, >=64=77.3% 00:16:17.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.573 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:16:17.573 issued rwts: total=277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.573 job1: (groupid=0, jobs=1): err= 0: pid=1640291: Wed Nov 20 11:39:18 2024 00:16:17.573 read: IOPS=60, BW=60.1MiB/s (63.0MB/s)(604MiB/10050msec) 00:16:17.573 slat (usec): min=36, max=2030.2k, avg=16581.22, stdev=123182.69 00:16:17.573 clat (msec): min=32, max=5306, avg=1173.85, stdev=902.00 00:16:17.573 lat (msec): min=51, max=5335, avg=1190.43, stdev=919.81 00:16:17.573 clat percentiles (msec): 00:16:17.573 | 1.00th=[ 59], 5.00th=[ 103], 10.00th=[ 245], 20.00th=[ 584], 00:16:17.573 | 30.00th=[ 676], 40.00th=[ 818], 50.00th=[ 852], 60.00th=[ 953], 00:16:17.573 | 70.00th=[ 1167], 80.00th=[ 2232], 90.00th=[ 2769], 95.00th=[ 2836], 00:16:17.573 | 99.00th=[ 3473], 99.50th=[ 3507], 99.90th=[ 5336], 99.95th=[ 5336], 00:16:17.573 | 99.99th=[ 5336] 00:16:17.573 bw ( KiB/s): min=53248, max=231424, per=4.05%, avg=121837.00, stdev=73244.26, samples=8 00:16:17.573 iops : min= 52, max= 226, avg=118.88, stdev=71.60, samples=8 00:16:17.573 lat (msec) : 50=0.17%, 100=4.47%, 250=5.96%, 500=5.79%, 750=17.55% 00:16:17.573 lat (msec) : 1000=27.98%, 2000=15.40%, >=2000=22.68% 00:16:17.573 cpu : usr=0.03%, sys=1.35%, ctx=828, majf=0, minf=32769 00:16:17.573 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, 
>=64=89.6% 00:16:17.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.573 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:17.573 issued rwts: total=604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.573 job1: (groupid=0, jobs=1): err= 0: pid=1640292: Wed Nov 20 11:39:18 2024 00:16:17.573 read: IOPS=6, BW=7024KiB/s (7192kB/s)(86.0MiB/12538msec) 00:16:17.573 slat (usec): min=485, max=2090.2k, avg=121012.58, stdev=452387.23 00:16:17.573 clat (msec): min=2130, max=12527, avg=8142.18, stdev=2830.05 00:16:17.573 lat (msec): min=4211, max=12537, avg=8263.20, stdev=2792.22 00:16:17.573 clat percentiles (msec): 00:16:17.573 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 6074], 20.00th=[ 6141], 00:16:17.573 | 30.00th=[ 6208], 40.00th=[ 6275], 50.00th=[ 6342], 60.00th=[ 8423], 00:16:17.573 | 70.00th=[10671], 80.00th=[12416], 90.00th=[12550], 95.00th=[12550], 00:16:17.573 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550], 00:16:17.573 | 99.99th=[12550] 00:16:17.573 lat (msec) : >=2000=100.00% 00:16:17.573 cpu : usr=0.00%, sys=0.53%, ctx=110, majf=0, minf=22017 00:16:17.573 IO depths : 1=1.2%, 2=2.3%, 4=4.7%, 8=9.3%, 16=18.6%, 32=37.2%, >=64=26.7% 00:16:17.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.573 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:17.573 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.573 job1: (groupid=0, jobs=1): err= 0: pid=1640293: Wed Nov 20 11:39:18 2024 00:16:17.573 read: IOPS=31, BW=32.0MiB/s (33.5MB/s)(322MiB/10070msec) 00:16:17.573 slat (usec): min=52, max=2070.2k, avg=31061.78, stdev=201945.66 00:16:17.573 clat (msec): min=66, max=6040, avg=2555.46, stdev=1877.41 00:16:17.573 lat (msec): min=69, max=6051, avg=2586.53, stdev=1888.63 
00:16:17.573 clat percentiles (msec): 00:16:17.573 | 1.00th=[ 74], 5.00th=[ 211], 10.00th=[ 305], 20.00th=[ 567], 00:16:17.573 | 30.00th=[ 885], 40.00th=[ 1334], 50.00th=[ 1502], 60.00th=[ 4111], 00:16:17.573 | 70.00th=[ 4396], 80.00th=[ 4597], 90.00th=[ 4799], 95.00th=[ 4933], 00:16:17.573 | 99.00th=[ 4933], 99.50th=[ 6007], 99.90th=[ 6074], 99.95th=[ 6074], 00:16:17.573 | 99.99th=[ 6074] 00:16:17.573 bw ( KiB/s): min=30720, max=126976, per=2.21%, avg=66560.00, stdev=36698.65, samples=6 00:16:17.573 iops : min= 30, max= 124, avg=65.00, stdev=35.84, samples=6 00:16:17.573 lat (msec) : 100=1.24%, 250=6.52%, 500=10.56%, 750=8.70%, 1000=5.28% 00:16:17.573 lat (msec) : 2000=18.32%, >=2000=49.38% 00:16:17.573 cpu : usr=0.01%, sys=1.01%, ctx=367, majf=0, minf=32769 00:16:17.573 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=9.9%, >=64=80.4% 00:16:17.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.573 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:16:17.573 issued rwts: total=322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.573 job1: (groupid=0, jobs=1): err= 0: pid=1640294: Wed Nov 20 11:39:18 2024 00:16:17.573 read: IOPS=55, BW=55.8MiB/s (58.5MB/s)(702MiB/12575msec) 00:16:17.573 slat (usec): min=54, max=2112.5k, avg=14899.98, stdev=125238.49 00:16:17.574 clat (msec): min=405, max=5252, avg=1664.04, stdev=1669.80 00:16:17.574 lat (msec): min=409, max=5256, avg=1678.94, stdev=1677.30 00:16:17.574 clat percentiles (msec): 00:16:17.574 | 1.00th=[ 409], 5.00th=[ 418], 10.00th=[ 468], 20.00th=[ 550], 00:16:17.574 | 30.00th=[ 575], 40.00th=[ 634], 50.00th=[ 835], 60.00th=[ 1083], 00:16:17.574 | 70.00th=[ 1183], 80.00th=[ 4329], 90.00th=[ 4665], 95.00th=[ 4799], 00:16:17.574 | 99.00th=[ 5201], 99.50th=[ 5201], 99.90th=[ 5269], 99.95th=[ 5269], 00:16:17.574 | 99.99th=[ 5269] 00:16:17.574 bw ( KiB/s): min= 1460, max=305152, per=4.89%, 
avg=147126.50, stdev=96550.13, samples=8 00:16:17.574 iops : min= 1, max= 298, avg=143.62, stdev=94.38, samples=8 00:16:17.574 lat (msec) : 500=11.40%, 750=34.62%, 1000=11.11%, 2000=19.09%, >=2000=23.79% 00:16:17.574 cpu : usr=0.01%, sys=1.34%, ctx=954, majf=0, minf=32769 00:16:17.574 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.6%, >=64=91.0% 00:16:17.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.574 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:17.574 issued rwts: total=702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.574 job1: (groupid=0, jobs=1): err= 0: pid=1640295: Wed Nov 20 11:39:18 2024 00:16:17.574 read: IOPS=14, BW=14.2MiB/s (14.9MB/s)(143MiB/10048msec) 00:16:17.574 slat (usec): min=576, max=2025.7k, avg=70191.35, stdev=313166.36 00:16:17.574 clat (msec): min=9, max=9775, avg=2068.51, stdev=2332.74 00:16:17.574 lat (msec): min=64, max=9890, avg=2138.70, stdev=2418.47 00:16:17.574 clat percentiles (msec): 00:16:17.574 | 1.00th=[ 65], 5.00th=[ 83], 10.00th=[ 153], 20.00th=[ 384], 00:16:17.574 | 30.00th=[ 667], 40.00th=[ 944], 50.00th=[ 1267], 60.00th=[ 1552], 00:16:17.574 | 70.00th=[ 1804], 80.00th=[ 3842], 90.00th=[ 6007], 95.00th=[ 8087], 00:16:17.574 | 99.00th=[ 8154], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:16:17.574 | 99.99th=[ 9731] 00:16:17.574 bw ( KiB/s): min=30720, max=30720, per=1.02%, avg=30720.00, stdev= 0.00, samples=1 00:16:17.574 iops : min= 30, max= 30, avg=30.00, stdev= 0.00, samples=1 00:16:17.574 lat (msec) : 10=0.70%, 100=6.29%, 250=6.99%, 500=9.09%, 750=9.09% 00:16:17.574 lat (msec) : 1000=9.79%, 2000=36.36%, >=2000=21.68% 00:16:17.574 cpu : usr=0.00%, sys=0.99%, ctx=303, majf=0, minf=32769 00:16:17.574 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=5.6%, 16=11.2%, 32=22.4%, >=64=55.9% 00:16:17.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:16:17.574 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=5.9% 00:16:17.574 issued rwts: total=143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.574 job1: (groupid=0, jobs=1): err= 0: pid=1640296: Wed Nov 20 11:39:18 2024 00:16:17.574 read: IOPS=10, BW=11.0MiB/s (11.5MB/s)(139MiB/12642msec) 00:16:17.574 slat (usec): min=469, max=2043.3k, avg=75759.46, stdev=329969.85 00:16:17.574 clat (msec): min=2110, max=12620, avg=7062.35, stdev=3459.96 00:16:17.574 lat (msec): min=3216, max=12623, avg=7138.11, stdev=3465.92 00:16:17.574 clat percentiles (msec): 00:16:17.574 | 1.00th=[ 3205], 5.00th=[ 3306], 10.00th=[ 3406], 20.00th=[ 3675], 00:16:17.574 | 30.00th=[ 3943], 40.00th=[ 4212], 50.00th=[ 6409], 60.00th=[ 8423], 00:16:17.574 | 70.00th=[ 9194], 80.00th=[10671], 90.00th=[12550], 95.00th=[12550], 00:16:17.574 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:16:17.574 | 99.99th=[12684] 00:16:17.574 bw ( KiB/s): min= 2052, max=22528, per=0.41%, avg=12290.00, stdev=14478.72, samples=2 00:16:17.574 iops : min= 2, max= 22, avg=12.00, stdev=14.14, samples=2 00:16:17.574 lat (msec) : >=2000=100.00% 00:16:17.574 cpu : usr=0.02%, sys=0.93%, ctx=255, majf=0, minf=32769 00:16:17.574 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=5.8%, 16=11.5%, 32=23.0%, >=64=54.7% 00:16:17.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.574 complete : 0=0.0%, 4=92.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=7.7% 00:16:17.574 issued rwts: total=139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.574 job2: (groupid=0, jobs=1): err= 0: pid=1640307: Wed Nov 20 11:39:18 2024 00:16:17.574 read: IOPS=68, BW=68.8MiB/s (72.1MB/s)(718MiB/10439msec) 00:16:17.574 slat (usec): min=43, max=2107.9k, avg=14420.51, stdev=107006.79 00:16:17.574 clat (msec): min=81, max=3437, avg=1706.14, stdev=1000.01 
00:16:17.574 lat (msec): min=543, max=3474, avg=1720.56, stdev=999.35 00:16:17.574 clat percentiles (msec): 00:16:17.574 | 1.00th=[ 550], 5.00th=[ 584], 10.00th=[ 625], 20.00th=[ 743], 00:16:17.574 | 30.00th=[ 911], 40.00th=[ 1183], 50.00th=[ 1351], 60.00th=[ 1519], 00:16:17.574 | 70.00th=[ 2668], 80.00th=[ 3037], 90.00th=[ 3171], 95.00th=[ 3239], 00:16:17.574 | 99.00th=[ 3406], 99.50th=[ 3406], 99.90th=[ 3440], 99.95th=[ 3440], 00:16:17.574 | 99.99th=[ 3440] 00:16:17.574 bw ( KiB/s): min=16384, max=218698, per=3.34%, avg=100643.83, stdev=54733.47, samples=12 00:16:17.574 iops : min= 16, max= 213, avg=98.17, stdev=53.37, samples=12 00:16:17.574 lat (msec) : 100=0.14%, 750=20.61%, 1000=14.76%, 2000=29.11%, >=2000=35.38% 00:16:17.574 cpu : usr=0.02%, sys=1.65%, ctx=1066, majf=0, minf=32769 00:16:17.574 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:16:17.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.574 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:17.574 issued rwts: total=718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.574 job2: (groupid=0, jobs=1): err= 0: pid=1640308: Wed Nov 20 11:39:18 2024 00:16:17.574 read: IOPS=44, BW=44.1MiB/s (46.2MB/s)(465MiB/10553msec) 00:16:17.574 slat (usec): min=47, max=2107.9k, avg=22511.41, stdev=132494.72 00:16:17.574 clat (msec): min=82, max=4339, avg=2680.84, stdev=1083.12 00:16:17.574 lat (msec): min=1241, max=4370, avg=2703.35, stdev=1079.96 00:16:17.574 clat percentiles (msec): 00:16:17.574 | 1.00th=[ 1234], 5.00th=[ 1301], 10.00th=[ 1334], 20.00th=[ 1536], 00:16:17.574 | 30.00th=[ 1770], 40.00th=[ 2140], 50.00th=[ 2534], 60.00th=[ 3004], 00:16:17.574 | 70.00th=[ 3608], 80.00th=[ 4010], 90.00th=[ 4178], 95.00th=[ 4279], 00:16:17.574 | 99.00th=[ 4329], 99.50th=[ 4329], 99.90th=[ 4329], 99.95th=[ 4329], 00:16:17.574 | 99.99th=[ 4329] 00:16:17.574 bw ( 
KiB/s): min=12263, max=135168, per=2.08%, avg=62743.91, stdev=32160.16, samples=11 00:16:17.574 iops : min= 11, max= 132, avg=61.09, stdev=31.61, samples=11 00:16:17.574 lat (msec) : 100=0.22%, 2000=35.91%, >=2000=63.87% 00:16:17.574 cpu : usr=0.02%, sys=1.40%, ctx=1074, majf=0, minf=32769 00:16:17.574 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.9%, >=64=86.5% 00:16:17.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.574 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:16:17.574 issued rwts: total=465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.574 job2: (groupid=0, jobs=1): err= 0: pid=1640309: Wed Nov 20 11:39:18 2024 00:16:17.574 read: IOPS=50, BW=50.5MiB/s (52.9MB/s)(526MiB/10425msec) 00:16:17.574 slat (usec): min=73, max=2137.6k, avg=19657.08, stdev=143656.33 00:16:17.574 clat (msec): min=81, max=4768, avg=1650.51, stdev=906.72 00:16:17.574 lat (msec): min=799, max=4916, avg=1670.16, stdev=914.45 00:16:17.574 clat percentiles (msec): 00:16:17.574 | 1.00th=[ 802], 5.00th=[ 818], 10.00th=[ 852], 20.00th=[ 978], 00:16:17.574 | 30.00th=[ 1045], 40.00th=[ 1099], 50.00th=[ 1267], 60.00th=[ 1401], 00:16:17.574 | 70.00th=[ 2232], 80.00th=[ 2635], 90.00th=[ 2970], 95.00th=[ 3104], 00:16:17.574 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4799], 99.95th=[ 4799], 00:16:17.574 | 99.99th=[ 4799] 00:16:17.574 bw ( KiB/s): min=43008, max=163840, per=3.38%, avg=101888.00, stdev=48880.01, samples=8 00:16:17.574 iops : min= 42, max= 160, avg=99.50, stdev=47.73, samples=8 00:16:17.574 lat (msec) : 100=0.19%, 1000=24.33%, 2000=44.87%, >=2000=30.61% 00:16:17.574 cpu : usr=0.02%, sys=1.16%, ctx=896, majf=0, minf=32769 00:16:17.574 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.0% 00:16:17.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.574 complete : 0=0.0%, 4=99.8%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:17.574 issued rwts: total=526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.574 job2: (groupid=0, jobs=1): err= 0: pid=1640310: Wed Nov 20 11:39:18 2024 00:16:17.574 read: IOPS=17, BW=18.0MiB/s (18.8MB/s)(190MiB/10573msec) 00:16:17.574 slat (usec): min=447, max=2089.1k, avg=55208.04, stdev=273236.56 00:16:17.574 clat (msec): min=82, max=10407, avg=6442.08, stdev=2267.35 00:16:17.574 lat (msec): min=2140, max=10474, avg=6497.29, stdev=2231.47 00:16:17.574 clat percentiles (msec): 00:16:17.574 | 1.00th=[ 2140], 5.00th=[ 3540], 10.00th=[ 3675], 20.00th=[ 3943], 00:16:17.574 | 30.00th=[ 4212], 40.00th=[ 6208], 50.00th=[ 6342], 60.00th=[ 6477], 00:16:17.574 | 70.00th=[ 8557], 80.00th=[ 9060], 90.00th=[ 9463], 95.00th=[ 9463], 00:16:17.574 | 99.00th=[ 9597], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:16:17.574 | 99.99th=[10402] 00:16:17.574 bw ( KiB/s): min= 2027, max=73728, per=0.71%, avg=21500.50, stdev=27563.60, samples=6 00:16:17.574 iops : min= 1, max= 72, avg=20.83, stdev=27.06, samples=6 00:16:17.574 lat (msec) : 100=0.53%, >=2000=99.47% 00:16:17.574 cpu : usr=0.01%, sys=1.14%, ctx=539, majf=0, minf=32769 00:16:17.574 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.2%, 16=8.4%, 32=16.8%, >=64=66.8% 00:16:17.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.574 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6% 00:16:17.574 issued rwts: total=190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.574 job2: (groupid=0, jobs=1): err= 0: pid=1640311: Wed Nov 20 11:39:18 2024 00:16:17.574 read: IOPS=68, BW=68.6MiB/s (71.9MB/s)(687MiB/10015msec) 00:16:17.574 slat (usec): min=72, max=1802.7k, avg=14550.87, stdev=92862.86 00:16:17.574 clat (msec): min=13, max=5276, avg=1665.78, stdev=1607.84 00:16:17.574 lat (msec): min=24, 
max=5285, avg=1680.33, stdev=1614.67 00:16:17.574 clat percentiles (msec): 00:16:17.574 | 1.00th=[ 40], 5.00th=[ 209], 10.00th=[ 426], 20.00th=[ 726], 00:16:17.574 | 30.00th=[ 844], 40.00th=[ 877], 50.00th=[ 927], 60.00th=[ 1045], 00:16:17.574 | 70.00th=[ 1334], 80.00th=[ 3205], 90.00th=[ 5134], 95.00th=[ 5134], 00:16:17.574 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:16:17.574 | 99.99th=[ 5269] 00:16:17.574 bw ( KiB/s): min= 6144, max=167936, per=2.96%, avg=88994.91, stdev=54296.06, samples=11 00:16:17.574 iops : min= 6, max= 164, avg=86.91, stdev=53.02, samples=11 00:16:17.575 lat (msec) : 20=0.15%, 50=1.75%, 100=0.58%, 250=3.49%, 500=5.53% 00:16:17.575 lat (msec) : 750=10.77%, 1000=35.81%, 2000=19.80%, >=2000=22.13% 00:16:17.575 cpu : usr=0.01%, sys=1.65%, ctx=1095, majf=0, minf=32769 00:16:17.575 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:16:17.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.575 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:17.575 issued rwts: total=687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.575 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.575 job2: (groupid=0, jobs=1): err= 0: pid=1640312: Wed Nov 20 11:39:18 2024 00:16:17.575 read: IOPS=20, BW=20.4MiB/s (21.4MB/s)(256MiB/12540msec) 00:16:17.575 slat (usec): min=102, max=2099.9k, avg=40706.50, stdev=249042.69 00:16:17.575 clat (msec): min=911, max=12476, avg=5918.63, stdev=4086.91 00:16:17.575 lat (msec): min=933, max=12491, avg=5959.34, stdev=4090.74 00:16:17.575 clat percentiles (msec): 00:16:17.575 | 1.00th=[ 944], 5.00th=[ 969], 10.00th=[ 1003], 20.00th=[ 1053], 00:16:17.575 | 30.00th=[ 2123], 40.00th=[ 3943], 50.00th=[ 5269], 60.00th=[ 7483], 00:16:17.575 | 70.00th=[ 8490], 80.00th=[11073], 90.00th=[11208], 95.00th=[11208], 00:16:17.575 | 99.00th=[11342], 99.50th=[11342], 99.90th=[12416], 99.95th=[12416], 00:16:17.575 | 
99.99th=[12416] 00:16:17.575 bw ( KiB/s): min= 1532, max=116736, per=1.09%, avg=32959.50, stdev=36949.66, samples=8 00:16:17.575 iops : min= 1, max= 114, avg=32.13, stdev=36.14, samples=8 00:16:17.575 lat (msec) : 1000=9.77%, 2000=19.92%, >=2000=70.31% 00:16:17.575 cpu : usr=0.02%, sys=1.01%, ctx=343, majf=0, minf=32769 00:16:17.575 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.2%, 32=12.5%, >=64=75.4% 00:16:17.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.575 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:16:17.575 issued rwts: total=256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.575 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.575 job2: (groupid=0, jobs=1): err= 0: pid=1640313: Wed Nov 20 11:39:18 2024 00:16:17.575 read: IOPS=7, BW=8133KiB/s (8328kB/s)(100MiB/12591msec) 00:16:17.575 slat (usec): min=549, max=2148.8k, avg=104741.07, stdev=414467.42 00:16:17.575 clat (msec): min=2115, max=12586, avg=8811.97, stdev=3117.73 00:16:17.575 lat (msec): min=4191, max=12589, avg=8916.71, stdev=3066.01 00:16:17.575 clat percentiles (msec): 00:16:17.575 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 5873], 20.00th=[ 5940], 00:16:17.575 | 30.00th=[ 6074], 40.00th=[ 6208], 50.00th=[ 8490], 60.00th=[10671], 00:16:17.575 | 70.00th=[12281], 80.00th=[12416], 90.00th=[12550], 95.00th=[12550], 00:16:17.575 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550], 00:16:17.575 | 99.99th=[12550] 00:16:17.575 lat (msec) : >=2000=100.00% 00:16:17.575 cpu : usr=0.00%, sys=0.71%, ctx=155, majf=0, minf=25601 00:16:17.575 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=8.0%, 16=16.0%, 32=32.0%, >=64=37.0% 00:16:17.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.575 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:17.575 issued rwts: total=100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.575 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:16:17.575 job2: (groupid=0, jobs=1): err= 0: pid=1640315: Wed Nov 20 11:39:18 2024 00:16:17.575 read: IOPS=15, BW=16.0MiB/s (16.8MB/s)(200MiB/12513msec) 00:16:17.575 slat (usec): min=661, max=2093.5k, avg=51943.12, stdev=267994.53 00:16:17.575 clat (msec): min=1519, max=7669, avg=4181.99, stdev=1668.36 00:16:17.575 lat (msec): min=1525, max=7713, avg=4233.94, stdev=1669.26 00:16:17.575 clat percentiles (msec): 00:16:17.575 | 1.00th=[ 1519], 5.00th=[ 1569], 10.00th=[ 1586], 20.00th=[ 1770], 00:16:17.575 | 30.00th=[ 4044], 40.00th=[ 4329], 50.00th=[ 4463], 60.00th=[ 4799], 00:16:17.575 | 70.00th=[ 5067], 80.00th=[ 5336], 90.00th=[ 5537], 95.00th=[ 7617], 00:16:17.575 | 99.00th=[ 7684], 99.50th=[ 7684], 99.90th=[ 7684], 99.95th=[ 7684], 00:16:17.575 | 99.99th=[ 7684] 00:16:17.575 bw ( KiB/s): min= 1595, max=88064, per=1.24%, avg=37262.75, stdev=42803.00, samples=4 00:16:17.575 iops : min= 1, max= 86, avg=36.25, stdev=41.96, samples=4 00:16:17.575 lat (msec) : 2000=24.50%, >=2000=75.50% 00:16:17.575 cpu : usr=0.00%, sys=0.84%, ctx=482, majf=0, minf=32769 00:16:17.575 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=16.0%, >=64=68.5% 00:16:17.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.575 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:16:17.575 issued rwts: total=200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.575 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.575 job2: (groupid=0, jobs=1): err= 0: pid=1640316: Wed Nov 20 11:39:18 2024 00:16:17.575 read: IOPS=105, BW=106MiB/s (111MB/s)(1109MiB/10477msec) 00:16:17.575 slat (usec): min=39, max=2107.9k, avg=9369.85, stdev=98024.06 00:16:17.575 clat (msec): min=82, max=4187, avg=1163.14, stdev=1183.84 00:16:17.575 lat (msec): min=239, max=4189, avg=1172.51, stdev=1186.71 00:16:17.575 clat percentiles (msec): 00:16:17.575 | 1.00th=[ 255], 5.00th=[ 266], 10.00th=[ 271], 20.00th=[ 309], 
00:16:17.575 | 30.00th=[ 405], 40.00th=[ 498], 50.00th=[ 642], 60.00th=[ 751], 00:16:17.575 | 70.00th=[ 1020], 80.00th=[ 2400], 90.00th=[ 2802], 95.00th=[ 4111], 00:16:17.575 | 99.00th=[ 4178], 99.50th=[ 4178], 99.90th=[ 4178], 99.95th=[ 4178], 00:16:17.575 | 99.99th=[ 4178] 00:16:17.575 bw ( KiB/s): min=14336, max=450560, per=5.56%, avg=167424.00, stdev=134863.35, samples=12 00:16:17.575 iops : min= 14, max= 440, avg=163.50, stdev=131.70, samples=12 00:16:17.575 lat (msec) : 100=0.09%, 250=0.27%, 500=39.68%, 750=20.74%, 1000=8.21% 00:16:17.575 lat (msec) : 2000=8.12%, >=2000=22.90% 00:16:17.575 cpu : usr=0.03%, sys=1.68%, ctx=1181, majf=0, minf=32770 00:16:17.575 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:16:17.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.575 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.575 issued rwts: total=1109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.575 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.575 job2: (groupid=0, jobs=1): err= 0: pid=1640317: Wed Nov 20 11:39:18 2024 00:16:17.575 read: IOPS=37, BW=37.0MiB/s (38.8MB/s)(376MiB/10153msec) 00:16:17.575 slat (usec): min=44, max=1996.2k, avg=26744.01, stdev=166507.72 00:16:17.575 clat (msec): min=94, max=5833, avg=2108.02, stdev=1570.65 00:16:17.575 lat (msec): min=171, max=5848, avg=2134.76, stdev=1584.11 00:16:17.575 clat percentiles (msec): 00:16:17.575 | 1.00th=[ 174], 5.00th=[ 275], 10.00th=[ 393], 20.00th=[ 818], 00:16:17.575 | 30.00th=[ 1020], 40.00th=[ 1083], 50.00th=[ 1167], 60.00th=[ 3037], 00:16:17.575 | 70.00th=[ 3406], 80.00th=[ 3473], 90.00th=[ 3742], 95.00th=[ 5671], 00:16:17.575 | 99.00th=[ 5805], 99.50th=[ 5805], 99.90th=[ 5805], 99.95th=[ 5805], 00:16:17.575 | 99.99th=[ 5805] 00:16:17.575 bw ( KiB/s): min=40960, max=124928, per=2.81%, avg=84650.67, stdev=42024.77, samples=6 00:16:17.575 iops : min= 40, max= 122, avg=82.67, stdev=41.04, 
samples=6 00:16:17.575 lat (msec) : 100=0.27%, 250=3.99%, 500=7.45%, 750=8.24%, 1000=7.98% 00:16:17.575 lat (msec) : 2000=27.13%, >=2000=44.95% 00:16:17.575 cpu : usr=0.01%, sys=1.14%, ctx=609, majf=0, minf=32769 00:16:17.575 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.5%, >=64=83.2% 00:16:17.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.575 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:16:17.575 issued rwts: total=376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.575 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.575 job2: (groupid=0, jobs=1): err= 0: pid=1640318: Wed Nov 20 11:39:18 2024 00:16:17.575 read: IOPS=8, BW=8347KiB/s (8547kB/s)(103MiB/12636msec) 00:16:17.575 slat (usec): min=654, max=2100.3k, avg=102091.32, stdev=431500.12 00:16:17.575 clat (msec): min=2120, max=12632, avg=10965.52, stdev=2708.21 00:16:17.575 lat (msec): min=4189, max=12635, avg=11067.61, stdev=2565.96 00:16:17.575 clat percentiles (msec): 00:16:17.575 | 1.00th=[ 4178], 5.00th=[ 6342], 10.00th=[ 6342], 20.00th=[ 8490], 00:16:17.575 | 30.00th=[10671], 40.00th=[12416], 50.00th=[12416], 60.00th=[12550], 00:16:17.575 | 70.00th=[12550], 80.00th=[12550], 90.00th=[12550], 95.00th=[12684], 00:16:17.576 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:16:17.576 | 99.99th=[12684] 00:16:17.576 lat (msec) : >=2000=100.00% 00:16:17.576 cpu : usr=0.02%, sys=0.80%, ctx=95, majf=0, minf=26369 00:16:17.576 IO depths : 1=1.0%, 2=1.9%, 4=3.9%, 8=7.8%, 16=15.5%, 32=31.1%, >=64=38.8% 00:16:17.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.576 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:17.576 issued rwts: total=103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.576 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.576 job2: (groupid=0, jobs=1): err= 0: pid=1640319: Wed Nov 20 11:39:18 2024 
00:16:17.576 read: IOPS=36, BW=36.7MiB/s (38.5MB/s)(386MiB/10513msec) 00:16:17.576 slat (usec): min=496, max=2001.7k, avg=25903.59, stdev=129307.66 00:16:17.576 clat (msec): min=512, max=6218, avg=2481.38, stdev=1363.72 00:16:17.576 lat (msec): min=521, max=6219, avg=2507.28, stdev=1373.89 00:16:17.576 clat percentiles (msec): 00:16:17.576 | 1.00th=[ 575], 5.00th=[ 894], 10.00th=[ 1099], 20.00th=[ 1670], 00:16:17.576 | 30.00th=[ 2039], 40.00th=[ 2089], 50.00th=[ 2232], 60.00th=[ 2299], 00:16:17.576 | 70.00th=[ 2333], 80.00th=[ 2433], 90.00th=[ 5537], 95.00th=[ 5805], 00:16:17.576 | 99.00th=[ 6141], 99.50th=[ 6208], 99.90th=[ 6208], 99.95th=[ 6208], 00:16:17.576 | 99.99th=[ 6208] 00:16:17.576 bw ( KiB/s): min=34816, max=69632, per=1.96%, avg=58922.78, stdev=11200.60, samples=9 00:16:17.576 iops : min= 34, max= 68, avg=57.44, stdev=10.90, samples=9 00:16:17.576 lat (msec) : 750=3.37%, 1000=4.92%, 2000=17.10%, >=2000=74.61% 00:16:17.576 cpu : usr=0.01%, sys=1.40%, ctx=1058, majf=0, minf=32769 00:16:17.576 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.1%, 32=8.3%, >=64=83.7% 00:16:17.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.576 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:16:17.576 issued rwts: total=386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.576 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.576 job2: (groupid=0, jobs=1): err= 0: pid=1640320: Wed Nov 20 11:39:18 2024 00:16:17.576 read: IOPS=62, BW=62.8MiB/s (65.8MB/s)(635MiB/10115msec) 00:16:17.576 slat (usec): min=44, max=1924.7k, avg=15762.34, stdev=86611.49 00:16:17.576 clat (msec): min=102, max=4622, avg=1629.75, stdev=1083.01 00:16:17.576 lat (msec): min=114, max=4632, avg=1645.51, stdev=1088.13 00:16:17.576 clat percentiles (msec): 00:16:17.576 | 1.00th=[ 192], 5.00th=[ 558], 10.00th=[ 768], 20.00th=[ 944], 00:16:17.576 | 30.00th=[ 1028], 40.00th=[ 1116], 50.00th=[ 1418], 60.00th=[ 1586], 00:16:17.576 | 
70.00th=[ 1670], 80.00th=[ 1787], 90.00th=[ 4212], 95.00th=[ 4463], 00:16:17.576 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:16:17.576 | 99.99th=[ 4597] 00:16:17.576 bw ( KiB/s): min= 6144, max=143073, per=2.87%, avg=86488.33, stdev=40178.99, samples=12 00:16:17.576 iops : min= 6, max= 139, avg=84.33, stdev=39.13, samples=12 00:16:17.576 lat (msec) : 250=1.57%, 500=2.99%, 750=4.57%, 1000=16.85%, 2000=61.26% 00:16:17.576 lat (msec) : >=2000=12.76% 00:16:17.576 cpu : usr=0.03%, sys=1.47%, ctx=1387, majf=0, minf=32769 00:16:17.576 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.0%, >=64=90.1% 00:16:17.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.576 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:17.576 issued rwts: total=635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.576 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.576 job3: (groupid=0, jobs=1): err= 0: pid=1640326: Wed Nov 20 11:39:18 2024 00:16:17.576 read: IOPS=6, BW=6813KiB/s (6977kB/s)(84.0MiB/12625msec) 00:16:17.576 slat (usec): min=653, max=2083.8k, avg=125037.55, stdev=472483.95 00:16:17.576 clat (msec): min=2121, max=12623, avg=10010.20, stdev=3635.60 00:16:17.576 lat (msec): min=4115, max=12624, avg=10135.24, stdev=3540.38 00:16:17.576 clat percentiles (msec): 00:16:17.576 | 1.00th=[ 2123], 5.00th=[ 4144], 10.00th=[ 4245], 20.00th=[ 4245], 00:16:17.576 | 30.00th=[ 8490], 40.00th=[12416], 50.00th=[12550], 60.00th=[12550], 00:16:17.576 | 70.00th=[12550], 80.00th=[12550], 90.00th=[12550], 95.00th=[12684], 00:16:17.576 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:16:17.576 | 99.99th=[12684] 00:16:17.576 lat (msec) : >=2000=100.00% 00:16:17.576 cpu : usr=0.01%, sys=0.71%, ctx=109, majf=0, minf=21505 00:16:17.576 IO depths : 1=1.2%, 2=2.4%, 4=4.8%, 8=9.5%, 16=19.0%, 32=38.1%, >=64=25.0% 00:16:17.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:16:17.576 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:17.576 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.576 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.576 job3: (groupid=0, jobs=1): err= 0: pid=1640327: Wed Nov 20 11:39:18 2024 00:16:17.576 read: IOPS=5, BW=5410KiB/s (5540kB/s)(66.0MiB/12493msec) 00:16:17.576 slat (usec): min=650, max=2084.6k, avg=157126.35, stdev=512890.69 00:16:17.576 clat (msec): min=2121, max=12471, avg=7638.52, stdev=2182.02 00:16:17.576 lat (msec): min=4198, max=12491, avg=7795.65, stdev=2151.80 00:16:17.576 clat percentiles (msec): 00:16:17.576 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 6208], 20.00th=[ 6208], 00:16:17.576 | 30.00th=[ 6275], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 8356], 00:16:17.576 | 70.00th=[ 8490], 80.00th=[ 8490], 90.00th=[10671], 95.00th=[12416], 00:16:17.576 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:16:17.576 | 99.99th=[12416] 00:16:17.576 lat (msec) : >=2000=100.00% 00:16:17.576 cpu : usr=0.00%, sys=0.45%, ctx=93, majf=0, minf=16897 00:16:17.576 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.1%, 16=24.2%, 32=48.5%, >=64=4.5% 00:16:17.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.576 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:17.576 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.576 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.576 job3: (groupid=0, jobs=1): err= 0: pid=1640328: Wed Nov 20 11:39:18 2024 00:16:17.576 read: IOPS=5, BW=5503KiB/s (5635kB/s)(56.0MiB/10421msec) 00:16:17.576 slat (usec): min=474, max=2099.9k, avg=184616.36, stdev=559254.63 00:16:17.576 clat (msec): min=81, max=10409, avg=5829.99, stdev=2616.77 00:16:17.576 lat (msec): min=2128, max=10420, avg=6014.61, stdev=2568.09 00:16:17.576 clat percentiles (msec): 00:16:17.576 | 1.00th=[ 82], 5.00th=[ 
2123], 10.00th=[ 2140], 20.00th=[ 2165], 00:16:17.576 | 30.00th=[ 4329], 40.00th=[ 6275], 50.00th=[ 6275], 60.00th=[ 6342], 00:16:17.576 | 70.00th=[ 6409], 80.00th=[ 8557], 90.00th=[10268], 95.00th=[10402], 00:16:17.576 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:16:17.576 | 99.99th=[10402] 00:16:17.576 lat (msec) : 100=1.79%, >=2000=98.21% 00:16:17.576 cpu : usr=0.00%, sys=0.37%, ctx=85, majf=0, minf=14337 00:16:17.576 IO depths : 1=1.8%, 2=3.6%, 4=7.1%, 8=14.3%, 16=28.6%, 32=44.6%, >=64=0.0% 00:16:17.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.576 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:17.576 issued rwts: total=56,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.576 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.576 job3: (groupid=0, jobs=1): err= 0: pid=1640329: Wed Nov 20 11:39:18 2024 00:16:17.576 read: IOPS=4, BW=5029KiB/s (5150kB/s)(62.0MiB/12624msec) 00:16:17.576 slat (usec): min=772, max=2108.8k, avg=169388.42, stdev=547912.76 00:16:17.576 clat (msec): min=2121, max=12622, avg=10220.35, stdev=2841.01 00:16:17.576 lat (msec): min=4187, max=12623, avg=10389.74, stdev=2657.37 00:16:17.576 clat percentiles (msec): 00:16:17.576 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[ 8557], 00:16:17.576 | 30.00th=[ 8557], 40.00th=[ 8557], 50.00th=[12416], 60.00th=[12550], 00:16:17.576 | 70.00th=[12550], 80.00th=[12550], 90.00th=[12550], 95.00th=[12684], 00:16:17.576 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:16:17.576 | 99.99th=[12684] 00:16:17.576 lat (msec) : >=2000=100.00% 00:16:17.576 cpu : usr=0.00%, sys=0.51%, ctx=90, majf=0, minf=15873 00:16:17.576 IO depths : 1=1.6%, 2=3.2%, 4=6.5%, 8=12.9%, 16=25.8%, 32=50.0%, >=64=0.0% 00:16:17.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.576 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 
00:16:17.576 issued rwts: total=62,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.576 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.576 job3: (groupid=0, jobs=1): err= 0: pid=1640330: Wed Nov 20 11:39:18 2024
00:16:17.576 read: IOPS=51, BW=52.0MiB/s (54.5MB/s)(523MiB/10060msec)
00:16:17.576 slat (usec): min=49, max=1942.7k, avg=19126.29, stdev=144012.31
00:16:17.576 clat (msec): min=53, max=6053, avg=1930.16, stdev=2030.25
00:16:17.576 lat (msec): min=60, max=6090, avg=1949.29, stdev=2036.00
00:16:17.576 clat percentiles (msec):
00:16:17.576 | 1.00th=[ 83], 5.00th=[ 255], 10.00th=[ 397], 20.00th=[ 502],
00:16:17.576 | 30.00th=[ 531], 40.00th=[ 760], 50.00th=[ 869], 60.00th=[ 911],
00:16:17.576 | 70.00th=[ 2735], 80.00th=[ 3876], 90.00th=[ 5940], 95.00th=[ 5940],
00:16:17.576 | 99.00th=[ 6074], 99.50th=[ 6074], 99.90th=[ 6074], 99.95th=[ 6074],
00:16:17.576 | 99.99th=[ 6074]
00:16:17.576 bw ( KiB/s): min= 8192, max=200704, per=2.99%, avg=90112.00, stdev=74238.23, samples=9
00:16:17.576 iops : min= 8, max= 196, avg=88.00, stdev=72.50, samples=9
00:16:17.576 lat (msec) : 100=1.53%, 250=3.06%, 500=14.72%, 750=19.69%, 1000=26.77%
00:16:17.576 lat (msec) : 2000=2.68%, >=2000=31.55%
00:16:17.576 cpu : usr=0.04%, sys=1.45%, ctx=505, majf=0, minf=32769
00:16:17.576 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.1%, >=64=88.0%
00:16:17.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.576 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:16:17.576 issued rwts: total=523,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.576 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.576 job3: (groupid=0, jobs=1): err= 0: pid=1640331: Wed Nov 20 11:39:18 2024
00:16:17.576 read: IOPS=13, BW=13.9MiB/s (14.5MB/s)(145MiB/10450msec)
00:16:17.576 slat (usec): min=119, max=2115.9k, avg=71476.92, stdev=347344.86
00:16:17.576 clat (msec): min=84, max=8559, avg=7367.61, stdev=1983.26
00:16:17.576 lat (msec): min=2136, max=8560, avg=7439.09, stdev=1859.34
00:16:17.576 clat percentiles (msec):
00:16:17.576 | 1.00th=[ 2140], 5.00th=[ 2198], 10.00th=[ 3809], 20.00th=[ 6477],
00:16:17.576 | 30.00th=[ 8087], 40.00th=[ 8154], 50.00th=[ 8221], 60.00th=[ 8288],
00:16:17.576 | 70.00th=[ 8356], 80.00th=[ 8423], 90.00th=[ 8490], 95.00th=[ 8490],
00:16:17.576 | 99.00th=[ 8557], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557],
00:16:17.576 | 99.99th=[ 8557]
00:16:17.577 bw ( KiB/s): min= 4096, max=10240, per=0.23%, avg=6963.20, stdev=2335.08, samples=5
00:16:17.577 iops : min= 4, max= 10, avg= 6.80, stdev= 2.28, samples=5
00:16:17.577 lat (msec) : 100=0.69%, >=2000=99.31%
00:16:17.577 cpu : usr=0.02%, sys=1.06%, ctx=131, majf=0, minf=32769
00:16:17.577 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=5.5%, 16=11.0%, 32=22.1%, >=64=56.6%
00:16:17.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.577 complete : 0=0.0%, 4=94.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=5.3%
00:16:17.577 issued rwts: total=145,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.577 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.577 job3: (groupid=0, jobs=1): err= 0: pid=1640332: Wed Nov 20 11:39:18 2024
00:16:17.577 read: IOPS=3, BW=3093KiB/s (3167kB/s)(38.0MiB/12581msec)
00:16:17.577 slat (usec): min=885, max=2070.4k, avg=275429.41, stdev=671163.46
00:16:17.577 clat (msec): min=2113, max=12578, avg=9358.01, stdev=3233.79
00:16:17.577 lat (msec): min=4177, max=12580, avg=9633.44, stdev=3040.02
00:16:17.577 clat percentiles (msec):
00:16:17.577 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 6342],
00:16:17.577 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[10671],
00:16:17.577 | 70.00th=[12550], 80.00th=[12550], 90.00th=[12550], 95.00th=[12550],
00:16:17.577 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550],
00:16:17.577 | 99.99th=[12550]
00:16:17.577 lat (msec) : >=2000=100.00%
00:16:17.577 cpu : usr=0.00%, sys=0.31%, ctx=80, majf=0, minf=9729
00:16:17.577 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0%
00:16:17.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.577 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:16:17.577 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.577 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.577 job3: (groupid=0, jobs=1): err= 0: pid=1640333: Wed Nov 20 11:39:18 2024
00:16:17.577 read: IOPS=5, BW=5890KiB/s (6031kB/s)(60.0MiB/10431msec)
00:16:17.577 slat (usec): min=514, max=2100.4k, avg=172437.22, stdev=545026.35
00:16:17.577 clat (msec): min=84, max=10426, avg=7417.97, stdev=2468.92
00:16:17.577 lat (msec): min=2135, max=10430, avg=7590.40, stdev=2303.81
00:16:17.577 clat percentiles (msec):
00:16:17.577 | 1.00th=[ 85], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 6409],
00:16:17.577 | 30.00th=[ 6544], 40.00th=[ 6544], 50.00th=[ 8490], 60.00th=[ 8490],
00:16:17.577 | 70.00th=[ 8658], 80.00th=[10268], 90.00th=[10402], 95.00th=[10402],
00:16:17.577 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:16:17.577 | 99.99th=[10402]
00:16:17.577 lat (msec) : 100=1.67%, >=2000=98.33%
00:16:17.577 cpu : usr=0.00%, sys=0.44%, ctx=73, majf=0, minf=15361
00:16:17.577 IO depths : 1=1.7%, 2=3.3%, 4=6.7%, 8=13.3%, 16=26.7%, 32=48.3%, >=64=0.0%
00:16:17.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.577 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:16:17.577 issued rwts: total=60,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.577 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.577 job3: (groupid=0, jobs=1): err= 0: pid=1640334: Wed Nov 20 11:39:18 2024
00:16:17.577 read: IOPS=80, BW=80.7MiB/s (84.6MB/s)(810MiB/10039msec)
00:16:17.577 slat (usec): min=49, max=2068.0k, avg=12342.18, stdev=103719.01
00:16:17.577 clat (msec): min=37, max=4574, avg=924.52, stdev=785.48
00:16:17.577 lat (msec): min=38, max=4610, avg=936.86, stdev=798.78
00:16:17.577 clat percentiles (msec):
00:16:17.577 | 1.00th=[ 56], 5.00th=[ 414], 10.00th=[ 481], 20.00th=[ 493],
00:16:17.577 | 30.00th=[ 510], 40.00th=[ 523], 50.00th=[ 542], 60.00th=[ 642],
00:16:17.577 | 70.00th=[ 693], 80.00th=[ 1418], 90.00th=[ 2299], 95.00th=[ 2869],
00:16:17.577 | 99.00th=[ 3004], 99.50th=[ 3004], 99.90th=[ 4597], 99.95th=[ 4597],
00:16:17.577 | 99.99th=[ 4597]
00:16:17.577 bw ( KiB/s): min=40960, max=270336, per=5.42%, avg=163328.00, stdev=97537.58, samples=8
00:16:17.577 iops : min= 40, max= 264, avg=159.50, stdev=95.25, samples=8
00:16:17.577 lat (msec) : 50=0.86%, 100=0.74%, 250=2.35%, 500=19.63%, 750=50.37%
00:16:17.577 lat (msec) : 1000=2.59%, 2000=9.63%, >=2000=13.83%
00:16:17.577 cpu : usr=0.08%, sys=1.57%, ctx=913, majf=0, minf=32769
00:16:17.577 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2%
00:16:17.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.577 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:17.577 issued rwts: total=810,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.577 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.577 job3: (groupid=0, jobs=1): err= 0: pid=1640335: Wed Nov 20 11:39:18 2024
00:16:17.577 read: IOPS=7, BW=7879KiB/s (8068kB/s)(97.0MiB/12607msec)
00:16:17.577 slat (usec): min=572, max=2079.8k, avg=108156.82, stdev=435820.45
00:16:17.577 clat (msec): min=2114, max=12603, avg=11188.14, stdev=2168.13
00:16:17.577 lat (msec): min=4175, max=12605, avg=11296.30, stdev=1962.73
00:16:17.577 clat percentiles (msec):
00:16:17.577 | 1.00th=[ 2123], 5.00th=[ 6342], 10.00th=[ 8557], 20.00th=[10537],
00:16:17.577 | 30.00th=[10671], 40.00th=[10671], 50.00th=[12416], 60.00th=[12550],
00:16:17.577 | 70.00th=[12550], 80.00th=[12550], 90.00th=[12550], 95.00th=[12550],
00:16:17.577 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550],
00:16:17.577 | 99.99th=[12550]
00:16:17.577 lat (msec) : >=2000=100.00%
00:16:17.577 cpu : usr=0.01%, sys=0.78%, ctx=111, majf=0, minf=24833
00:16:17.577 IO depths : 1=1.0%, 2=2.1%, 4=4.1%, 8=8.2%, 16=16.5%, 32=33.0%, >=64=35.1%
00:16:17.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.577 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:16:17.577 issued rwts: total=97,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.577 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.577 job3: (groupid=0, jobs=1): err= 0: pid=1640336: Wed Nov 20 11:39:18 2024
00:16:17.577 read: IOPS=3, BW=3357KiB/s (3437kB/s)(41.0MiB/12507msec)
00:16:17.577 slat (usec): min=631, max=2067.9k, avg=253431.02, stdev=660108.70
00:16:17.577 clat (msec): min=2116, max=12496, avg=9443.20, stdev=2989.36
00:16:17.577 lat (msec): min=4184, max=12506, avg=9696.63, stdev=2786.20
00:16:17.577 clat percentiles (msec):
00:16:17.577 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 6409],
00:16:17.577 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[10671],
00:16:17.577 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12550],
00:16:17.577 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550],
00:16:17.577 | 99.99th=[12550]
00:16:17.577 lat (msec) : >=2000=100.00%
00:16:17.577 cpu : usr=0.00%, sys=0.28%, ctx=72, majf=0, minf=10497
00:16:17.577 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0%
00:16:17.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.577 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:16:17.577 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.577 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.577 job3: (groupid=0, jobs=1): err= 0: pid=1640337: Wed Nov 20 11:39:18 2024
00:16:17.577 read: IOPS=19, BW=19.4MiB/s (20.3MB/s)(205MiB/10567msec)
00:16:17.577 slat (usec): min=469, max=2106.6k, avg=49059.78, stdev=268648.86
00:16:17.577 clat (msec): min=508, max=10406, avg=4965.43, stdev=4045.52
00:16:17.577 lat (msec): min=572, max=10428, avg=5014.49, stdev=4051.85
00:16:17.577 clat percentiles (msec):
00:16:17.577 | 1.00th=[ 575], 5.00th=[ 625], 10.00th=[ 701], 20.00th=[ 1045],
00:16:17.577 | 30.00th=[ 1301], 40.00th=[ 1703], 50.00th=[ 2072], 60.00th=[ 9060],
00:16:17.577 | 70.00th=[ 9329], 80.00th=[ 9597], 90.00th=[ 9597], 95.00th=[ 9731],
00:16:17.577 | 99.00th=[10268], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:16:17.577 | 99.99th=[10402]
00:16:17.577 bw ( KiB/s): min= 1969, max=86016, per=1.33%, avg=39916.25, stdev=42277.33, samples=4
00:16:17.577 iops : min= 1, max= 84, avg=38.75, stdev=41.56, samples=4
00:16:17.577 lat (msec) : 750=10.24%, 1000=8.29%, 2000=29.27%, >=2000=52.20%
00:16:17.577 cpu : usr=0.00%, sys=1.26%, ctx=344, majf=0, minf=32769
00:16:17.577 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=3.9%, 16=7.8%, 32=15.6%, >=64=69.3%
00:16:17.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.577 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3%
00:16:17.577 issued rwts: total=205,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.577 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.577 job3: (groupid=0, jobs=1): err= 0: pid=1640339: Wed Nov 20 11:39:18 2024
00:16:17.577 read: IOPS=30, BW=30.2MiB/s (31.7MB/s)(319MiB/10567msec)
00:16:17.577 slat (usec): min=72, max=2115.9k, avg=32863.90, stdev=232938.87
00:16:17.577 clat (msec): min=80, max=9473, avg=4024.54, stdev=4058.12
00:16:17.577 lat (msec): min=548, max=9477, avg=4057.40, stdev=4060.86
00:16:17.577 clat percentiles (msec):
00:16:17.577 | 1.00th=[ 550], 5.00th=[ 558], 10.00th=[ 567], 20.00th=[ 592],
00:16:17.577 | 30.00th=[ 625], 40.00th=[ 659], 50.00th=[ 793], 60.00th=[ 4329],
00:16:17.577 | 70.00th=[ 8926], 80.00th=[ 9194], 90.00th=[ 9329], 95.00th=[ 9329],
00:16:17.577 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463],
00:16:17.577 | 99.99th=[ 9463]
00:16:17.577 bw ( KiB/s): min= 2048, max=192512, per=1.87%, avg=56173.71, stdev=71590.78, samples=7
00:16:17.577 iops : min= 2, max= 188, avg=54.86, stdev=69.91, samples=7
00:16:17.577 lat (msec) : 100=0.31%, 750=46.71%, 1000=9.72%, >=2000=43.26%
00:16:17.577 cpu : usr=0.00%, sys=1.37%, ctx=338, majf=0, minf=32769
00:16:17.577 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.5%, 16=5.0%, 32=10.0%, >=64=80.3%
00:16:17.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.577 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:16:17.577 issued rwts: total=319,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.577 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.577 job4: (groupid=0, jobs=1): err= 0: pid=1640356: Wed Nov 20 11:39:18 2024
00:16:17.577 read: IOPS=44, BW=44.8MiB/s (47.0MB/s)(467MiB/10415msec)
00:16:17.577 slat (usec): min=66, max=2071.2k, avg=22109.90, stdev=164771.66
00:16:17.577 clat (msec): min=88, max=4771, avg=1787.72, stdev=1524.85
00:16:17.577 lat (msec): min=423, max=4807, avg=1809.83, stdev=1531.75
00:16:17.577 clat percentiles (msec):
00:16:17.577 | 1.00th=[ 426], 5.00th=[ 451], 10.00th=[ 518], 20.00th=[ 659],
00:16:17.577 | 30.00th=[ 776], 40.00th=[ 869], 50.00th=[ 919], 60.00th=[ 953],
00:16:17.578 | 70.00th=[ 3071], 80.00th=[ 3842], 90.00th=[ 4245], 95.00th=[ 4463],
00:16:17.578 | 99.00th=[ 4732], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799],
00:16:17.578 | 99.99th=[ 4799]
00:16:17.578 bw ( KiB/s): min= 4096, max=286720, per=3.84%, avg=115712.00, stdev=96301.74, samples=6
00:16:17.578 iops : min= 4, max= 280, avg=113.00, stdev=94.04, samples=6
00:16:17.578 lat (msec) : 100=0.21%, 500=8.57%, 750=19.49%, 1000=38.12%, 2000=1.50%
00:16:17.578 lat (msec) : >=2000=32.12%
00:16:17.578 cpu : usr=0.00%, sys=1.22%, ctx=641, majf=0, minf=32769
00:16:17.578 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.9%, >=64=86.5%
00:16:17.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.578 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:16:17.578 issued rwts: total=467,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.578 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.578 job4: (groupid=0, jobs=1): err= 0: pid=1640357: Wed Nov 20 11:39:18 2024
00:16:17.578 read: IOPS=14, BW=14.1MiB/s (14.8MB/s)(177MiB/12528msec)
00:16:17.578 slat (usec): min=104, max=2129.5k, avg=58840.93, stdev=318610.40
00:16:17.578 clat (msec): min=394, max=12152, avg=8692.49, stdev=4366.36
00:16:17.578 lat (msec): min=396, max=12153, avg=8751.34, stdev=4341.90
00:16:17.578 clat percentiles (msec):
00:16:17.578 | 1.00th=[ 397], 5.00th=[ 401], 10.00th=[ 617], 20.00th=[ 3708],
00:16:17.578 | 30.00th=[ 6342], 40.00th=[10671], 50.00th=[11879], 60.00th=[11879],
00:16:17.578 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147],
00:16:17.578 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147],
00:16:17.578 | 99.99th=[12147]
00:16:17.578 bw ( KiB/s): min= 1503, max=36864, per=0.48%, avg=14550.71, stdev=12236.33, samples=7
00:16:17.578 iops : min= 1, max= 36, avg=14.14, stdev=12.03, samples=7
00:16:17.578 lat (msec) : 500=5.65%, 750=4.52%, 2000=6.78%, >=2000=83.05%
00:16:17.578 cpu : usr=0.01%, sys=0.66%, ctx=171, majf=0, minf=32769
00:16:17.578 IO depths : 1=0.6%, 2=1.1%, 4=2.3%, 8=4.5%, 16=9.0%, 32=18.1%, >=64=64.4%
00:16:17.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.578 complete : 0=0.0%, 4=98.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.0%
00:16:17.578 issued rwts: total=177,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.578 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.578 job4: (groupid=0, jobs=1): err= 0: pid=1640358: Wed Nov 20 11:39:18 2024
00:16:17.578 read: IOPS=12, BW=13.0MiB/s (13.6MB/s)(136MiB/10485msec)
00:16:17.578 slat (usec): min=99, max=2129.3k, avg=76954.60, stdev=348958.77
00:16:17.578 clat (msec): min=17, max=10465, avg=8162.72, stdev=2915.35
00:16:17.578 lat (msec): min=1948, max=10466, avg=8239.68, stdev=2835.74
00:16:17.578 clat percentiles (msec):
00:16:17.578 | 1.00th=[ 1955], 5.00th=[ 2089], 10.00th=[ 2089], 20.00th=[ 6275],
00:16:17.578 | 30.00th=[ 8423], 40.00th=[ 8557], 50.00th=[ 9731], 60.00th=[ 9866],
00:16:17.578 | 70.00th=[10134], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402],
00:16:17.578 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:16:17.578 | 99.99th=[10402]
00:16:17.578 bw ( KiB/s): min= 6144, max=10240, per=0.27%, avg=8192.00, stdev=2896.31, samples=2
00:16:17.578 iops : min= 6, max= 10, avg= 8.00, stdev= 2.83, samples=2
00:16:17.578 lat (msec) : 20=0.74%, 2000=2.21%, >=2000=97.06%
00:16:17.578 cpu : usr=0.00%, sys=1.18%, ctx=197, majf=0, minf=32769
00:16:17.578 IO depths : 1=0.7%, 2=1.5%, 4=2.9%, 8=5.9%, 16=11.8%, 32=23.5%, >=64=53.7%
00:16:17.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.578 complete : 0=0.0%, 4=90.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=10.0%
00:16:17.578 issued rwts: total=136,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.578 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.578 job4: (groupid=0, jobs=1): err= 0: pid=1640359: Wed Nov 20 11:39:18 2024
00:16:17.578 read: IOPS=16, BW=16.6MiB/s (17.4MB/s)(174MiB/10475msec)
00:16:17.578 slat (usec): min=535, max=2055.1k, avg=59606.46, stdev=300109.78
00:16:17.578 clat (msec): min=102, max=8483, avg=4796.29, stdev=1435.10
00:16:17.578 lat (msec): min=2157, max=8484, avg=4855.89, stdev=1426.22
00:16:17.578 clat percentiles (msec):
00:16:17.578 | 1.00th=[ 2165], 5.00th=[ 3473], 10.00th=[ 3507], 20.00th=[ 3641],
00:16:17.578 | 30.00th=[ 3809], 40.00th=[ 3977], 50.00th=[ 4111], 60.00th=[ 5000],
00:16:17.578 | 70.00th=[ 6342], 80.00th=[ 6544], 90.00th=[ 6678], 95.00th=[ 6745],
00:16:17.578 | 99.00th=[ 8490], 99.50th=[ 8490], 99.90th=[ 8490], 99.95th=[ 8490],
00:16:17.578 | 99.99th=[ 8490]
00:16:17.578 bw ( KiB/s): min= 8192, max=75776, per=1.04%, avg=31402.67, stdev=38442.07, samples=3
00:16:17.578 iops : min= 8, max= 74, avg=30.67, stdev=37.54, samples=3
00:16:17.578 lat (msec) : 250=0.57%, >=2000=99.43%
00:16:17.578 cpu : usr=0.00%, sys=1.06%, ctx=250, majf=0, minf=32769
00:16:17.578 IO depths : 1=0.6%, 2=1.1%, 4=2.3%, 8=4.6%, 16=9.2%, 32=18.4%, >=64=63.8%
00:16:17.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.578 complete : 0=0.0%, 4=97.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.1%
00:16:17.578 issued rwts: total=174,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.578 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.578 job4: (groupid=0, jobs=1): err= 0: pid=1640360: Wed Nov 20 11:39:18 2024
00:16:17.578 read: IOPS=7, BW=7602KiB/s (7784kB/s)(78.0MiB/10507msec)
00:16:17.578 slat (usec): min=518, max=2086.7k, avg=133289.36, stdev=469939.90
00:16:17.578 clat (msec): min=109, max=10503, avg=6649.23, stdev=3263.04
00:16:17.578 lat (msec): min=2147, max=10506, avg=6782.52, stdev=3204.27
00:16:17.578 clat percentiles (msec):
00:16:17.578 | 1.00th=[ 110], 5.00th=[ 2165], 10.00th=[ 2232], 20.00th=[ 2265],
00:16:17.578 | 30.00th=[ 4329], 40.00th=[ 4329], 50.00th=[ 6544], 60.00th=[ 8557],
00:16:17.578 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10537], 95.00th=[10537],
00:16:17.578 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537],
00:16:17.578 | 99.99th=[10537]
00:16:17.578 lat (msec) : 250=1.28%, >=2000=98.72%
00:16:17.578 cpu : usr=0.00%, sys=0.63%, ctx=116, majf=0, minf=19969
00:16:17.578 IO depths : 1=1.3%, 2=2.6%, 4=5.1%, 8=10.3%, 16=20.5%, 32=41.0%, >=64=19.2%
00:16:17.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.578 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:16:17.578 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.578 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.578 job4: (groupid=0, jobs=1): err= 0: pid=1640361: Wed Nov 20 11:39:18 2024
00:16:17.578 read: IOPS=85, BW=85.8MiB/s (90.0MB/s)(893MiB/10408msec)
00:16:17.578 slat (usec): min=46, max=2065.9k, avg=11194.32, stdev=118656.15
00:16:17.578 clat (msec): min=258, max=7586, avg=1425.74, stdev=2314.35
00:16:17.578 lat (msec): min=260, max=7589, avg=1436.94, stdev=2324.15
00:16:17.578 clat percentiles (msec):
00:16:17.578 | 1.00th=[ 259], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266],
00:16:17.578 | 30.00th=[ 268], 40.00th=[ 271], 50.00th=[ 334], 60.00th=[ 414],
00:16:17.578 | 70.00th=[ 894], 80.00th=[ 1200], 90.00th=[ 6879], 95.00th=[ 7349],
00:16:17.578 | 99.00th=[ 7550], 99.50th=[ 7550], 99.90th=[ 7617], 99.95th=[ 7617],
00:16:17.578 | 99.99th=[ 7617]
00:16:17.578 bw ( KiB/s): min= 4096, max=489472, per=5.21%, avg=156876.80, stdev=166859.15, samples=10
00:16:17.578 iops : min= 4, max= 478, avg=153.20, stdev=162.95, samples=10
00:16:17.578 lat (msec) : 500=63.27%, 750=4.82%, 1000=6.72%, 2000=9.18%, >=2000=16.01%
00:16:17.578 cpu : usr=0.05%, sys=1.60%, ctx=847, majf=0, minf=32769
00:16:17.578 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9%
00:16:17.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.578 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:17.578 issued rwts: total=893,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.578 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.578 job4: (groupid=0, jobs=1): err= 0: pid=1640362: Wed Nov 20 11:39:18 2024
00:16:17.578 read: IOPS=48, BW=48.6MiB/s (51.0MB/s)(510MiB/10490msec)
00:16:17.578 slat (usec): min=44, max=2088.8k, avg=20359.44, stdev=159804.59
00:16:17.578 clat (msec): min=103, max=4842, avg=1838.86, stdev=1753.57
00:16:17.578 lat (msec): min=478, max=4843, avg=1859.22, stdev=1759.13
00:16:17.578 clat percentiles (msec):
00:16:17.578 | 1.00th=[ 481], 5.00th=[ 481], 10.00th=[ 481], 20.00th=[ 498],
00:16:17.578 | 30.00th=[ 510], 40.00th=[ 518], 50.00th=[ 584], 60.00th=[ 852],
00:16:17.578 | 70.00th=[ 2937], 80.00th=[ 4530], 90.00th=[ 4665], 95.00th=[ 4732],
00:16:17.578 | 99.00th=[ 4799], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866],
00:16:17.578 | 99.99th=[ 4866]
00:16:17.578 bw ( KiB/s): min=18432, max=260096, per=4.33%, avg=130389.33, stdev=105718.60, samples=6
00:16:17.578 iops : min= 18, max= 254, avg=127.33, stdev=103.24, samples=6
00:16:17.578 lat (msec) : 250=0.20%, 500=22.35%, 750=33.14%, 1000=5.29%, 2000=4.31%
00:16:17.578 lat (msec) : >=2000=34.71%
00:16:17.578 cpu : usr=0.02%, sys=1.30%, ctx=521, majf=0, minf=32769
00:16:17.578 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.6%
00:16:17.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.578 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:16:17.578 issued rwts: total=510,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.578 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.578 job4: (groupid=0, jobs=1): err= 0: pid=1640363: Wed Nov 20 11:39:18 2024
00:16:17.578 read: IOPS=33, BW=34.0MiB/s (35.6MB/s)(424MiB/12478msec)
00:16:17.578 slat (usec): min=46, max=2065.5k, avg=24449.33, stdev=191097.67
00:16:17.578 clat (msec): min=273, max=6936, avg=2145.26, stdev=2245.22
00:16:17.578 lat (msec): min=275, max=6951, avg=2169.71, stdev=2259.38
00:16:17.578 clat percentiles (msec):
00:16:17.578 | 1.00th=[ 275], 5.00th=[ 275], 10.00th=[ 279], 20.00th=[ 279],
00:16:17.578 | 30.00th=[ 284], 40.00th=[ 401], 50.00th=[ 659], 60.00th=[ 978],
00:16:17.578 | 70.00th=[ 4866], 80.00th=[ 4933], 90.00th=[ 5067], 95.00th=[ 5067],
00:16:17.578 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6946], 99.95th=[ 6946],
00:16:17.578 | 99.99th=[ 6946]
00:16:17.578 bw ( KiB/s): min= 1630, max=446464, per=4.04%, avg=121517.60, stdev=188195.16, samples=5
00:16:17.578 iops : min= 1, max= 436, avg=118.40, stdev=183.88, samples=5
00:16:17.578 lat (msec) : 500=44.34%, 750=7.31%, 1000=9.20%, 2000=0.71%, >=2000=38.44%
00:16:17.578 cpu : usr=0.01%, sys=0.88%, ctx=496, majf=0, minf=32769
00:16:17.578 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.8%, 32=7.5%, >=64=85.1%
00:16:17.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.578 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:16:17.578 issued rwts: total=424,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.579 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.579 job4: (groupid=0, jobs=1): err= 0: pid=1640364: Wed Nov 20 11:39:18 2024
00:16:17.579 read: IOPS=12, BW=12.3MiB/s (12.9MB/s)(154MiB/12555msec)
00:16:17.579 slat (usec): min=144, max=2074.6k, avg=67799.17, stdev=317214.74
00:16:17.579 clat (msec): min=2112, max=10587, avg=7432.99, stdev=1326.80
00:16:17.579 lat (msec): min=4181, max=10710, avg=7500.79, stdev=1295.98
00:16:17.579 clat percentiles (msec):
00:16:17.579 | 1.00th=[ 4178], 5.00th=[ 4933], 10.00th=[ 4933], 20.00th=[ 6409],
00:16:17.579 | 30.00th=[ 7617], 40.00th=[ 7752], 50.00th=[ 7819], 60.00th=[ 7953],
00:16:17.579 | 70.00th=[ 8087], 80.00th=[ 8154], 90.00th=[ 8356], 95.00th=[ 8490],
00:16:17.579 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537],
00:16:17.579 | 99.99th=[10537]
00:16:17.579 bw ( KiB/s): min= 1448, max=30720, per=0.36%, avg=10939.20, stdev=11578.30, samples=5
00:16:17.579 iops : min= 1, max= 30, avg=10.60, stdev=11.39, samples=5
00:16:17.579 lat (msec) : >=2000=100.00%
00:16:17.579 cpu : usr=0.01%, sys=1.02%, ctx=196, majf=0, minf=32769
00:16:17.579 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=5.2%, 16=10.4%, 32=20.8%, >=64=59.1%
00:16:17.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.579 complete : 0=0.0%, 4=96.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.6%
00:16:17.579 issued rwts: total=154,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.579 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.579 job4: (groupid=0, jobs=1): err= 0: pid=1640365: Wed Nov 20 11:39:18 2024
00:16:17.579 read: IOPS=33, BW=33.2MiB/s (34.9MB/s)(349MiB/10500msec)
00:16:17.579 slat (usec): min=71, max=2086.2k, avg=29776.33, stdev=217229.40
00:16:17.579 clat (msec): min=105, max=8996, avg=3589.37, stdev=3610.77
00:16:17.579 lat (msec): min=504, max=8998, avg=3619.15, stdev=3614.52
00:16:17.579 clat percentiles (msec):
00:16:17.579 | 1.00th=[ 502], 5.00th=[ 518], 10.00th=[ 542], 20.00th=[ 558],
00:16:17.579 | 30.00th=[ 584], 40.00th=[ 684], 50.00th=[ 1011], 60.00th=[ 2702],
00:16:17.579 | 70.00th=[ 6879], 80.00th=[ 8792], 90.00th=[ 8792], 95.00th=[ 8926],
00:16:17.579 | 99.00th=[ 8926], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060],
00:16:17.579 | 99.99th=[ 9060]
00:16:17.579 bw ( KiB/s): min= 2048, max=196608, per=2.15%, avg=64658.29, stdev=89776.69, samples=7
00:16:17.579 iops : min= 2, max= 192, avg=63.14, stdev=87.67, samples=7
00:16:17.579 lat (msec) : 250=0.29%, 750=41.83%, 1000=7.74%, 2000=5.44%, >=2000=44.70%
00:16:17.579 cpu : usr=0.00%, sys=1.30%, ctx=385, majf=0, minf=32769
00:16:17.579 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.6%, 32=9.2%, >=64=81.9%
00:16:17.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.579 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:16:17.579 issued rwts: total=349,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.579 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.579 job4: (groupid=0, jobs=1): err= 0: pid=1640366: Wed Nov 20 11:39:18 2024
00:16:17.579 read: IOPS=26, BW=26.2MiB/s (27.5MB/s)(273MiB/10423msec)
00:16:17.579 slat (usec): min=448, max=2066.2k, avg=37849.05, stdev=239111.53
00:16:17.579 clat (msec): min=89, max=7938, avg=3920.79, stdev=3421.41
00:16:17.579 lat (msec): min=486, max=7940, avg=3958.64, stdev=3415.10
00:16:17.579 clat percentiles (msec):
00:16:17.579 | 1.00th=[ 485], 5.00th=[ 498], 10.00th=[ 506], 20.00th=[ 518],
00:16:17.579 | 30.00th=[ 531], 40.00th=[ 584], 50.00th=[ 2265], 60.00th=[ 7483],
00:16:17.579 | 70.00th=[ 7617], 80.00th=[ 7752], 90.00th=[ 7819], 95.00th=[ 7886],
00:16:17.579 | 99.00th=[ 7953], 99.50th=[ 7953], 99.90th=[ 7953], 99.95th=[ 7953],
00:16:17.579 | 99.99th=[ 7953]
00:16:17.579 bw ( KiB/s): min= 6144, max=223232, per=1.64%, avg=49493.33, stdev=86641.94, samples=6
00:16:17.579 iops : min= 6, max= 218, avg=48.33, stdev=84.61, samples=6
00:16:17.579 lat (msec) : 100=0.37%, 500=5.86%, 750=39.19%, 2000=2.20%, >=2000=52.38%
00:16:17.579 cpu : usr=0.00%, sys=0.74%, ctx=543, majf=0, minf=32769
00:16:17.579 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=2.9%, 16=5.9%, 32=11.7%, >=64=76.9%
00:16:17.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.579 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7%
00:16:17.579 issued rwts: total=273,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.579 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.579 job4: (groupid=0, jobs=1): err= 0: pid=1640367: Wed Nov 20 11:39:18 2024
00:16:17.579 read: IOPS=27, BW=27.1MiB/s (28.4MB/s)(283MiB/10441msec)
00:16:17.579 slat (usec): min=40, max=2027.7k, avg=36517.09, stdev=221278.72
00:16:17.579 clat (msec): min=104, max=5182, avg=2881.79, stdev=1790.21
00:16:17.579 lat (msec): min=939, max=5192, avg=2918.31, stdev=1785.12
00:16:17.579 clat percentiles (msec):
00:16:17.579 | 1.00th=[ 936], 5.00th=[ 969], 10.00th=[ 978], 20.00th=[ 1020],
00:16:17.579 | 30.00th=[ 1133], 40.00th=[ 1183], 50.00th=[ 3071], 60.00th=[ 4396],
00:16:17.579 | 70.00th=[ 4665], 80.00th=[ 4799], 90.00th=[ 4933], 95.00th=[ 5067],
00:16:17.579 | 99.00th=[ 5134], 99.50th=[ 5134], 99.90th=[ 5201], 99.95th=[ 5201],
00:16:17.579 | 99.99th=[ 5201]
00:16:17.579 bw ( KiB/s): min= 4096, max=155648, per=2.11%, avg=63444.60, stdev=65706.55, samples=5
00:16:17.579 iops : min= 4, max= 152, avg=61.80, stdev=64.03, samples=5
00:16:17.579 lat (msec) : 250=0.35%, 1000=16.96%, 2000=29.33%, >=2000=53.36%
00:16:17.579 cpu : usr=0.01%, sys=1.05%, ctx=524, majf=0, minf=32769
00:16:17.579 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.7%, 32=11.3%, >=64=77.7%
00:16:17.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.579 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:16:17.579 issued rwts: total=283,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.579 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.579 job4: (groupid=0, jobs=1): err= 0: pid=1640368: Wed Nov 20 11:39:18 2024
00:16:17.579 read: IOPS=51, BW=51.8MiB/s (54.3MB/s)(543MiB/10486msec)
00:16:17.579 slat (usec): min=43, max=2085.1k, avg=19141.89, stdev=165339.67
00:16:17.579 clat (msec): min=89, max=6989, avg=2048.87, stdev=2604.84
00:16:17.579 lat (msec): min=395, max=6992, avg=2068.02, stdev=2609.73
00:16:17.579 clat percentiles (msec):
00:16:17.579 | 1.00th=[ 397], 5.00th=[ 401], 10.00th=[ 405], 20.00th=[ 414],
00:16:17.579 | 30.00th=[ 439], 40.00th=[ 493], 50.00th=[ 550], 60.00th=[ 575],
00:16:17.579 | 70.00th=[ 768], 80.00th=[ 6678], 90.00th=[ 6812], 95.00th=[ 6879],
00:16:17.579 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 7013], 99.95th=[ 7013],
00:16:17.579 | 99.99th=[ 7013]
00:16:17.579 bw ( KiB/s): min= 6144, max=315392, per=4.03%, avg=121417.14, stdev=134793.64, samples=7
00:16:17.579 iops : min= 6, max= 308, avg=118.57, stdev=131.63, samples=7
00:16:17.579 lat (msec) : 100=0.18%, 500=41.07%, 750=28.18%, 1000=0.74%, >=2000=29.83%
00:16:17.579 cpu : usr=0.00%, sys=1.02%, ctx=794, majf=0, minf=32769
00:16:17.579 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.4%
00:16:17.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.579 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:16:17.579 issued rwts: total=543,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.579 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.579 job5: (groupid=0, jobs=1): err= 0: pid=1640376: Wed Nov 20 11:39:18 2024
00:16:17.579 read: IOPS=19, BW=19.2MiB/s (20.1MB/s)(200MiB/10443msec)
00:16:17.579 slat (usec): min=102, max=2005.2k, avg=52092.24, stdev=284007.56
00:16:17.579 clat (msec): min=23, max=10344, avg=5898.30, stdev=1983.92
00:16:17.579 lat (msec): min=1945, max=10351, avg=5950.39, stdev=1961.09
00:16:17.579 clat percentiles (msec):
00:16:17.579 | 1.00th=[ 1938], 5.00th=[ 2089], 10.00th=[ 4010], 20.00th=[ 4077],
00:16:17.579 | 30.00th=[ 4245], 40.00th=[ 6007], 50.00th=[ 6007], 60.00th=[ 6074],
00:16:17.579 | 70.00th=[ 6409], 80.00th=[ 7953], 90.00th=[ 8557], 95.00th=[ 8557],
00:16:17.579 | 99.00th=[ 9866], 99.50th=[10268], 99.90th=[10402], 99.95th=[10402],
00:16:17.579 | 99.99th=[10402]
00:16:17.579 bw ( KiB/s): min=12288, max=102400, per=1.22%, avg=36864.00, stdev=43861.00, samples=4
00:16:17.579 iops : min= 12, max= 100, avg=36.00, stdev=42.83, samples=4
00:16:17.579 lat (msec) : 50=0.50%, 2000=2.00%, >=2000=97.50%
00:16:17.579 cpu : usr=0.00%, sys=1.24%, ctx=146, majf=0, minf=32769
00:16:17.579 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=16.0%, >=64=68.5%
00:16:17.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.579 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4%
00:16:17.579 issued rwts: total=200,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.579 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.579 job5: (groupid=0, jobs=1): err= 0: pid=1640377: Wed Nov 20 11:39:18 2024
00:16:17.579 read: IOPS=101, BW=102MiB/s (106MB/s)(1023MiB/10076msec)
00:16:17.579 slat (usec): min=39, max=2058.4k, avg=9772.73, stdev=89017.26
00:16:17.579 clat (msec): min=74, max=5541, avg=1154.80, stdev=1524.65
00:16:17.579 lat (msec): min=76, max=5556, avg=1164.57, stdev=1530.61
00:16:17.579 clat percentiles (msec):
00:16:17.579 | 1.00th=[ 124], 5.00th=[ 138], 10.00th=[ 142], 20.00th=[ 142],
00:16:17.579 | 30.00th=[ 155], 40.00th=[ 255], 50.00th=[ 735], 60.00th=[ 1028],
00:16:17.579 | 70.00th=[ 1116], 80.00th=[ 1200], 90.00th=[ 4665], 95.00th=[ 5269],
00:16:17.579 | 99.00th=[ 5470], 99.50th=[ 5537], 99.90th=[ 5537], 99.95th=[ 5537],
00:16:17.579 | 99.99th=[ 5537]
00:16:17.579 bw ( KiB/s): min=10240, max=546816, per=5.08%, avg=152901.58, stdev=163180.01, samples=12
00:16:17.579 iops : min= 10, max= 534, avg=149.25, stdev=159.38, samples=12
00:16:17.579 lat (msec) : 100=0.49%, 250=39.49%, 500=4.89%, 750=5.47%, 1000=8.21%
00:16:17.579 lat (msec) : 2000=27.47%, >=2000=13.98%
00:16:17.579 cpu : usr=0.02%, sys=1.40%, ctx=1311, majf=0, minf=32769
00:16:17.579 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8%
00:16:17.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.579 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:17.579 issued rwts: total=1023,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.579 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.579 job5: (groupid=0, jobs=1): err= 0: pid=1640378: Wed Nov 20 11:39:18 2024
00:16:17.579 read: IOPS=98, BW=98.7MiB/s (103MB/s)(1032MiB/10456msec)
00:16:17.579 slat (usec): min=60, max=1994.5k, avg=10019.82, stdev=86718.40
00:16:17.579 clat (msec): min=110, max=3638, avg=1192.43, stdev=1097.92
00:16:17.579 lat (msec): min=223, max=3640, avg=1202.45, stdev=1101.25
00:16:17.579 clat percentiles (msec):
00:16:17.579 | 1.00th=[ 230], 5.00th=[ 239], 10.00th=[ 249], 20.00th=[ 313],
00:16:17.579 | 30.00th=[ 338], 40.00th=[ 542], 50.00th=[ 751], 60.00th=[ 969],
00:16:17.579 | 70.00th=[ 1150], 80.00th=[ 2567], 90.00th=[ 3138], 95.00th=[ 3406],
00:16:17.579 | 99.00th=[ 3608], 99.50th=[ 3608], 99.90th=[ 3641], 99.95th=[ 3641],
00:16:17.580 | 99.99th=[ 3641]
00:16:17.580 bw ( KiB/s): min=16384, max=444416, per=5.12%, avg=154282.67, stdev=129912.94, samples=12
00:16:17.580 iops : min= 16, max= 434, avg=150.67, stdev=126.87, samples=12
00:16:17.580 lat (msec) : 250=11.24%, 500=25.29%, 750=13.37%, 1000=11.53%, 2000=13.95%
00:16:17.580 lat (msec) : >=2000=24.61%
00:16:17.580 cpu : usr=0.04%, sys=1.60%, ctx=1699, majf=0, minf=32769
00:16:17.580 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9%
00:16:17.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.580 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:17.580 issued rwts: total=1032,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.580 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.580 job5: (groupid=0, jobs=1): err= 0: pid=1640379: Wed Nov 20 11:39:18 2024
00:16:17.580 read: IOPS=136, BW=136MiB/s (143MB/s)(1424MiB/10459msec)
00:16:17.580 slat (usec): min=40, max=2028.2k, avg=7263.73, stdev=84107.38
00:16:17.580 clat (msec): min=112, max=5311, avg=799.33, stdev=1329.44
00:16:17.580 lat (msec): min=126, max=5349, avg=806.59, stdev=1334.91
00:16:17.580 clat percentiles (msec):
00:16:17.580 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 136], 20.00th=[ 136],
00:16:17.580 | 30.00th=[ 138], 40.00th=[ 192], 50.00th=[ 305], 60.00th=[ 334],
00:16:17.580 | 70.00th=[ 372], 80.00th=[ 919], 90.00th=[ 2232], 95.00th=[ 4866],
00:16:17.580 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5336], 99.95th=[ 5336],
00:16:17.580 | 99.99th=[ 5336]
00:16:17.580 bw ( KiB/s): min=16384, max=788480, per=8.82%, avg=265420.80, stdev=283897.75, samples=10
00:16:17.580 iops : min= 16, max= 770, avg=259.20, stdev=277.24, samples=10
00:16:17.580 lat (msec) : 250=43.96%, 500=28.30%, 750=2.53%, 1000=8.57%, 2000=5.55%
00:16:17.580 lat (msec) : >=2000=11.10%
00:16:17.580 cpu : usr=0.02%, sys=1.50%, ctx=2108, majf=0, minf=32769
00:16:17.580 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6%
00:16:17.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.580 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:17.580 issued rwts: total=1424,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.580 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.580 job5: (groupid=0, jobs=1): err= 0: pid=1640380: Wed Nov 20 11:39:18 2024
00:16:17.580 read: IOPS=26, BW=26.5MiB/s (27.8MB/s)(277MiB/10454msec)
00:16:17.580 slat (usec): min=434, max=2078.8k, avg=37330.97, stdev=239767.86
00:16:17.580 clat (msec): min=112, max=7890, avg=3882.73, stdev=3194.66
00:16:17.580 lat (msec): min=477, max=7891, avg=3920.06, stdev=3188.53
00:16:17.580 clat percentiles (msec):
00:16:17.580 | 1.00th=[ 477], 5.00th=[ 523], 10.00th=[ 558], 20.00th=[ 642],
00:16:17.580 | 30.00th=[ 659], 40.00th=[ 693], 50.00th=[ 3708], 60.00th=[ 4396],
00:16:17.580 | 70.00th=[ 7617], 80.00th=[ 7684], 90.00th=[ 7819], 95.00th=[ 7819],
00:16:17.580 | 99.00th=[ 7886], 99.50th=[ 7886], 99.90th=[ 7886], 99.95th=[ 7886],
00:16:17.580 | 99.99th=[ 7886]
00:16:17.580 bw ( KiB/s): min= 4096, max=233472, per=2.03%, avg=61030.40, stdev=97030.69, samples=5
00:16:17.580 iops : min= 4, max= 228, avg=59.60, stdev=94.76, samples=5
00:16:17.580 lat (msec) : 250=0.36%, 500=3.61%, 750=38.27%, 2000=0.72%, >=2000=57.04%
00:16:17.580 cpu : usr=0.00%, sys=0.79%, ctx=486, majf=0, minf=32769
00:16:17.580 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.9%, 16=5.8%, 32=11.6%, >=64=77.3%
00:16:17.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.580 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7%
00:16:17.580 issued rwts: total=277,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.580 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.580 job5: (groupid=0, jobs=1): err= 0: pid=1640381: Wed Nov 20 11:39:18 2024
00:16:17.580 read: IOPS=228, BW=228MiB/s (239MB/s)(2285MiB/10016msec)
00:16:17.580 slat (usec): min=43, max=1717.6k, avg=4372.98, stdev=37050.29
00:16:17.580 clat (msec): min=13, max=2690, avg=426.27, stdev=272.56
00:16:17.580 lat (msec): min=15, max=2727, avg=430.64, stdev=278.52
00:16:17.580 clat percentiles (msec):
00:16:17.580 | 1.00th=[ 35], 5.00th=[ 130], 10.00th=[ 163], 20.00th=[ 207],
00:16:17.580 | 30.00th=[ 230], 40.00th=[ 253], 50.00th=[ 284], 60.00th=[ 359],
00:16:17.580 | 70.00th=[ 592], 80.00th=[ 768], 90.00th=[ 860], 95.00th=[ 902],
00:16:17.580 | 99.00th=[ 953], 99.50th=[ 978], 99.90th=[ 986], 99.95th=[ 995],
00:16:17.580 | 99.99th=[ 2702]
00:16:17.580 bw ( KiB/s): min=122880, max=594778, per=9.78%, avg=294559.60, stdev=170904.22, samples=15
00:16:17.580 iops : min= 120, max= 580, avg=287.60, stdev=166.79, samples=15
00:16:17.580 lat (msec) : 20=0.22%, 50=1.18%, 100=0.66%, 250=35.14%, 500=27.13%
00:16:17.580 lat (msec) : 750=14.75%, 1000=20.88%, >=2000=0.04%
00:16:17.580 cpu : usr=0.07%, sys=2.25%, ctx=2639, majf=0, minf=32769
00:16:17.580 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2%
00:16:17.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:17.580 issued rwts: total=2285,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.580 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:17.580 job5: (groupid=0, jobs=1): err= 0: pid=1640383: Wed Nov 20 11:39:18 2024
00:16:17.580 read: IOPS=70, BW=71.0MiB/s (74.4MB/s)(747MiB/10522msec)
00:16:17.580 slat (usec): min=40, max=2040.2k, avg=13943.32, stdev=135058.64
00:16:17.580 clat (msec): min=104, max=4551, avg=1507.98, stdev=1454.93
00:16:17.580 lat (msec): min=295, max=4553, avg=1521.92, stdev=1460.01
00:16:17.580 clat percentiles (msec):
00:16:17.580 | 1.00th=[ 300], 5.00th=[ 338], 10.00th=[ 368], 20.00th=[ 393],
00:16:17.580 | 30.00th=[ 481], 40.00th=[ 527], 50.00th=[ 592], 60.00th=[ 609],
00:16:17.580 | 70.00th=[ 2299], 80.00th=[ 2534], 90.00th=[ 4329], 95.00th=[ 4463], 00:16:17.580 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4530], 99.95th=[ 4530], 00:16:17.580 | 99.99th=[ 4530] 00:16:17.580 bw ( KiB/s): min=96256, max=350208, per=7.02%, avg=211285.33, stdev=90894.45, samples=6 00:16:17.580 iops : min= 94, max= 342, avg=206.33, stdev=88.76, samples=6 00:16:17.580 lat (msec) : 250=0.13%, 500=31.73%, 750=29.85%, >=2000=38.29% 00:16:17.580 cpu : usr=0.00%, sys=1.48%, ctx=874, majf=0, minf=32769 00:16:17.580 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6% 00:16:17.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.580 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:17.580 issued rwts: total=747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.580 job5: (groupid=0, jobs=1): err= 0: pid=1640384: Wed Nov 20 11:39:18 2024 00:16:17.580 read: IOPS=138, BW=139MiB/s (145MB/s)(1462MiB/10548msec) 00:16:17.580 slat (usec): min=43, max=2078.8k, avg=7139.36, stdev=66275.44 00:16:17.580 clat (msec): min=104, max=3132, avg=852.68, stdev=831.81 00:16:17.580 lat (msec): min=250, max=3137, avg=859.82, stdev=835.03 00:16:17.580 clat percentiles (msec): 00:16:17.580 | 1.00th=[ 264], 5.00th=[ 266], 10.00th=[ 266], 20.00th=[ 268], 00:16:17.580 | 30.00th=[ 271], 40.00th=[ 284], 50.00th=[ 489], 60.00th=[ 718], 00:16:17.580 | 70.00th=[ 877], 80.00th=[ 1020], 90.00th=[ 2433], 95.00th=[ 2869], 00:16:17.580 | 99.00th=[ 3071], 99.50th=[ 3138], 99.90th=[ 3138], 99.95th=[ 3138], 00:16:17.580 | 99.99th=[ 3138] 00:16:17.580 bw ( KiB/s): min= 2048, max=485376, per=6.98%, avg=210175.85, stdev=169245.73, samples=13 00:16:17.580 iops : min= 2, max= 474, avg=205.15, stdev=165.36, samples=13 00:16:17.580 lat (msec) : 250=0.07%, 500=50.14%, 750=11.63%, 1000=16.69%, 2000=6.09% 00:16:17.580 lat (msec) : >=2000=15.39% 00:16:17.580 cpu : 
usr=0.04%, sys=1.99%, ctx=1705, majf=0, minf=32769 00:16:17.580 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:16:17.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.580 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.580 issued rwts: total=1462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.580 job5: (groupid=0, jobs=1): err= 0: pid=1640385: Wed Nov 20 11:39:18 2024 00:16:17.580 read: IOPS=80, BW=80.6MiB/s (84.5MB/s)(848MiB/10526msec) 00:16:17.580 slat (usec): min=64, max=2008.9k, avg=12277.36, stdev=111366.56 00:16:17.580 clat (msec): min=110, max=3357, avg=1323.01, stdev=1088.92 00:16:17.580 lat (msec): min=273, max=3361, avg=1335.29, stdev=1091.83 00:16:17.580 clat percentiles (msec): 00:16:17.580 | 1.00th=[ 271], 5.00th=[ 330], 10.00th=[ 351], 20.00th=[ 397], 00:16:17.580 | 30.00th=[ 435], 40.00th=[ 502], 50.00th=[ 802], 60.00th=[ 1116], 00:16:17.580 | 70.00th=[ 2333], 80.00th=[ 2735], 90.00th=[ 3004], 95.00th=[ 3205], 00:16:17.580 | 99.00th=[ 3339], 99.50th=[ 3339], 99.90th=[ 3373], 99.95th=[ 3373], 00:16:17.580 | 99.99th=[ 3373] 00:16:17.580 bw ( KiB/s): min=20480, max=346112, per=4.90%, avg=147456.00, stdev=98417.71, samples=10 00:16:17.580 iops : min= 20, max= 338, avg=144.00, stdev=96.11, samples=10 00:16:17.580 lat (msec) : 250=0.12%, 500=39.86%, 750=8.37%, 1000=9.55%, 2000=8.61% 00:16:17.580 lat (msec) : >=2000=33.49% 00:16:17.580 cpu : usr=0.05%, sys=1.27%, ctx=1706, majf=0, minf=32769 00:16:17.580 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.6% 00:16:17.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.580 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.581 issued rwts: total=848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.581 latency : target=0, window=0, percentile=100.00%, depth=128 
00:16:17.581 job5: (groupid=0, jobs=1): err= 0: pid=1640386: Wed Nov 20 11:39:18 2024 00:16:17.581 read: IOPS=78, BW=78.8MiB/s (82.6MB/s)(822MiB/10436msec) 00:16:17.581 slat (usec): min=42, max=2064.8k, avg=12564.00, stdev=133963.41 00:16:17.581 clat (msec): min=104, max=6467, avg=749.55, stdev=932.19 00:16:17.581 lat (msec): min=173, max=6577, avg=762.11, stdev=955.17 00:16:17.581 clat percentiles (msec): 00:16:17.581 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 207], 20.00th=[ 245], 00:16:17.581 | 30.00th=[ 268], 40.00th=[ 275], 50.00th=[ 279], 60.00th=[ 347], 00:16:17.581 | 70.00th=[ 510], 80.00th=[ 667], 90.00th=[ 2400], 95.00th=[ 2467], 00:16:17.581 | 99.00th=[ 2668], 99.50th=[ 4732], 99.90th=[ 6477], 99.95th=[ 6477], 00:16:17.581 | 99.99th=[ 6477] 00:16:17.581 bw ( KiB/s): min=194560, max=557056, per=11.80%, avg=355328.00, stdev=178966.77, samples=4 00:16:17.581 iops : min= 190, max= 544, avg=347.00, stdev=174.77, samples=4 00:16:17.581 lat (msec) : 250=20.92%, 500=47.81%, 750=11.68%, >=2000=19.59% 00:16:17.581 cpu : usr=0.00%, sys=1.41%, ctx=788, majf=0, minf=32769 00:16:17.581 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.3% 00:16:17.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.581 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.581 issued rwts: total=822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.581 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.581 job5: (groupid=0, jobs=1): err= 0: pid=1640387: Wed Nov 20 11:39:18 2024 00:16:17.581 read: IOPS=178, BW=179MiB/s (188MB/s)(1854MiB/10364msec) 00:16:17.581 slat (usec): min=39, max=2147.9k, avg=5574.94, stdev=67572.91 00:16:17.581 clat (msec): min=23, max=4434, avg=678.65, stdev=1005.60 00:16:17.581 lat (msec): min=103, max=4434, avg=684.23, stdev=1008.72 00:16:17.581 clat percentiles (msec): 00:16:17.581 | 1.00th=[ 126], 5.00th=[ 130], 10.00th=[ 130], 20.00th=[ 131], 00:16:17.581 | 30.00th=[ 
351], 40.00th=[ 401], 50.00th=[ 435], 60.00th=[ 502], 00:16:17.581 | 70.00th=[ 542], 80.00th=[ 634], 90.00th=[ 827], 95.00th=[ 4329], 00:16:17.581 | 99.00th=[ 4396], 99.50th=[ 4396], 99.90th=[ 4463], 99.95th=[ 4463], 00:16:17.581 | 99.99th=[ 4463] 00:16:17.581 bw ( KiB/s): min=14336, max=1001472, per=9.03%, avg=271911.38, stdev=241843.50, samples=13 00:16:17.581 iops : min= 14, max= 978, avg=265.54, stdev=236.18, samples=13 00:16:17.581 lat (msec) : 50=0.05%, 250=26.43%, 500=32.90%, 750=26.11%, 1000=7.28% 00:16:17.581 lat (msec) : 2000=0.16%, >=2000=7.07% 00:16:17.581 cpu : usr=0.04%, sys=1.86%, ctx=2317, majf=0, minf=32769 00:16:17.581 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:16:17.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.581 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.581 issued rwts: total=1854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.581 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.581 job5: (groupid=0, jobs=1): err= 0: pid=1640388: Wed Nov 20 11:39:18 2024 00:16:17.581 read: IOPS=98, BW=98.2MiB/s (103MB/s)(1031MiB/10498msec) 00:16:17.581 slat (usec): min=52, max=1938.1k, avg=10155.58, stdev=92780.76 00:16:17.581 clat (msec): min=23, max=4985, avg=1115.61, stdev=1294.64 00:16:17.581 lat (msec): min=124, max=5008, avg=1125.77, stdev=1299.63 00:16:17.581 clat percentiles (msec): 00:16:17.581 | 1.00th=[ 125], 5.00th=[ 127], 10.00th=[ 140], 20.00th=[ 194], 00:16:17.581 | 30.00th=[ 271], 40.00th=[ 279], 50.00th=[ 376], 60.00th=[ 852], 00:16:17.581 | 70.00th=[ 1552], 80.00th=[ 1653], 90.00th=[ 3507], 95.00th=[ 4279], 00:16:17.581 | 99.00th=[ 4866], 99.50th=[ 4933], 99.90th=[ 4933], 99.95th=[ 5000], 00:16:17.581 | 99.99th=[ 5000] 00:16:17.581 bw ( KiB/s): min= 2048, max=503808, per=5.12%, avg=154049.58, stdev=168495.38, samples=12 00:16:17.581 iops : min= 2, max= 492, avg=150.42, stdev=164.51, samples=12 00:16:17.581 lat 
(msec) : 50=0.10%, 250=27.84%, 500=27.55%, 750=3.39%, 1000=1.94% 00:16:17.581 lat (msec) : 2000=22.21%, >=2000=16.97% 00:16:17.581 cpu : usr=0.02%, sys=1.74%, ctx=1447, majf=0, minf=32487 00:16:17.581 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:16:17.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.581 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.581 issued rwts: total=1031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.581 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.581 job5: (groupid=0, jobs=1): err= 0: pid=1640389: Wed Nov 20 11:39:18 2024 00:16:17.581 read: IOPS=265, BW=266MiB/s (279MB/s)(2661MiB/10016msec) 00:16:17.581 slat (usec): min=75, max=2001.2k, avg=3754.20, stdev=62710.17 00:16:17.581 clat (msec): min=14, max=6261, avg=261.76, stdev=567.86 00:16:17.581 lat (msec): min=16, max=6274, avg=265.52, stdev=579.71 00:16:17.581 clat percentiles (msec): 00:16:17.581 | 1.00th=[ 45], 5.00th=[ 133], 10.00th=[ 136], 20.00th=[ 136], 00:16:17.581 | 30.00th=[ 138], 40.00th=[ 140], 50.00th=[ 140], 60.00th=[ 176], 00:16:17.581 | 70.00th=[ 266], 80.00th=[ 268], 90.00th=[ 279], 95.00th=[ 296], 00:16:17.581 | 99.00th=[ 4396], 99.50th=[ 4530], 99.90th=[ 6208], 99.95th=[ 6208], 00:16:17.581 | 99.99th=[ 6275] 00:16:17.581 bw ( KiB/s): min=309248, max=958464, per=21.55%, avg=648704.00, stdev=253214.19, samples=8 00:16:17.581 iops : min= 302, max= 936, avg=633.50, stdev=247.28, samples=8 00:16:17.581 lat (msec) : 20=0.15%, 50=1.05%, 100=1.69%, 250=61.74%, 500=33.26% 00:16:17.581 lat (msec) : >=2000=2.10% 00:16:17.581 cpu : usr=0.07%, sys=3.80%, ctx=2301, majf=0, minf=32769 00:16:17.581 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:16:17.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.581 issued rwts: 
total=2661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.581 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.581 00:16:17.581 Run status group 0 (all jobs): 00:16:17.581 READ: bw=2940MiB/s (3083MB/s), 2456KiB/s-266MiB/s (2515kB/s-279MB/s), io=36.3GiB (39.0GB), run=10015-12642msec 00:16:17.581 00:16:17.581 Disk stats (read/write): 00:16:17.581 nvme0n1: ios=34544/0, merge=0/0, ticks=7483467/0, in_queue=7483467, util=98.43% 00:16:17.581 nvme1n1: ios=35484/0, merge=0/0, ticks=7690138/0, in_queue=7690138, util=98.63% 00:16:17.581 nvme2n1: ios=45944/0, merge=0/0, ticks=7456212/0, in_queue=7456212, util=98.85% 00:16:17.581 nvme3n1: ios=19957/0, merge=0/0, ticks=7094424/0, in_queue=7094424, util=98.96% 00:16:17.581 nvme4n1: ios=35656/0, merge=0/0, ticks=8120413/0, in_queue=8120413, util=98.97% 00:16:17.581 nvme5n1: ios=125265/0, merge=0/0, ticks=9340420/0, in_queue=9340420, util=99.10% 00:16:17.581 11:39:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:16:17.581 11:39:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:16:17.581 11:39:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:17.581 11:39:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:16:17.581 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.581 11:39:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:16:17.581 11:39:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:16:17.581 11:39:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:17.581 11:39:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w 
SPDK00000000000000 00:16:17.581 11:39:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:17.581 11:39:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000000 00:16:17.581 11:39:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:16:17.581 11:39:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:17.581 11:39:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.581 11:39:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:17.581 11:39:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.581 11:39:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:17.581 11:39:20 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:18.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.147 11:39:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:16:18.147 11:39:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:16:18.147 11:39:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:18.147 11:39:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001 00:16:18.147 11:39:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000001 00:16:18.147 11:39:21 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:18.147 11:39:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:16:18.147 11:39:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.147 11:39:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.147 11:39:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:18.147 11:39:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.147 11:39:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:18.147 11:39:21 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:16:19.082 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:16:19.083 11:39:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:16:19.083 11:39:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:16:19.083 11:39:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:19.083 11:39:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002 00:16:19.083 11:39:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000002 00:16:19.083 11:39:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:19.083 11:39:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- 
# return 0 00:16:19.083 11:39:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:19.083 11:39:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.083 11:39:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:19.083 11:39:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.083 11:39:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:19.083 11:39:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:20.020 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:20.020 11:39:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:16:20.020 11:39:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:16:20.020 11:39:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:20.020 11:39:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003 00:16:20.279 11:39:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:20.279 11:39:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000003 00:16:20.279 11:39:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:16:20.279 11:39:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:20.279 11:39:23 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.279 11:39:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:20.279 11:39:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.279 11:39:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:20.279 11:39:23 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:21.217 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:21.217 11:39:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:16:21.217 11:39:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:16:21.217 11:39:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:21.217 11:39:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000004 00:16:21.217 11:39:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:21.217 11:39:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000004 00:16:21.217 11:39:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:16:21.217 11:39:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:21.217 11:39:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.217 11:39:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 
00:16:21.217 11:39:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.217 11:39:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:21.217 11:39:24 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:22.154 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:22.154 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:16:22.154 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:16:22.154 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:22.154 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005 00:16:22.154 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000005 00:16:22.154 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:22.154 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:16:22.154 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:16:22.154 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.154 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:22.154 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.154 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:22.155 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:16:22.155 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:22.155 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@99 -- # sync 00:16:22.155 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:16:22.155 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:16:22.155 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # set +e 00:16:22.155 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:22.155 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:16:22.155 rmmod nvme_rdma 00:16:22.155 rmmod nvme_fabrics 00:16:22.155 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:22.414 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # set -e 00:16:22.414 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # return 0 00:16:22.414 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # '[' -n 1638953 ']' 00:16:22.414 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@337 -- # killprocess 1638953 00:16:22.415 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' -z 1638953 ']' 00:16:22.415 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # kill -0 1638953 00:16:22.415 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # uname 00:16:22.415 11:39:25 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.415 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1638953 00:16:22.415 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.415 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.415 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1638953' 00:16:22.415 killing process with pid 1638953 00:16:22.415 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # kill 1638953 00:16:22.415 11:39:25 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@978 -- # wait 1638953 00:16:22.674 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:22.674 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # nvmf_fini 00:16:22.674 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@264 -- # local dev 00:16:22.674 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@267 -- # remove_target_ns 00:16:22.674 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:22.674 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:22.674 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:22.674 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@268 -- # delete_main_bridge 00:16:22.674 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@130 -- # [[ -e 
/sys/class/net/nvmf_br/address ]] 00:16:22.674 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@130 -- # return 0 00:16:22.674 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@41 -- # _dev=0 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@41 -- # dev_map=() 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/setup.sh@284 -- # iptr 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@542 -- # iptables-save 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@542 -- # iptables-restore 00:16:22.675 00:16:22.675 real 0m34.275s 00:16:22.675 user 1m51.367s 00:16:22.675 sys 0m17.693s 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:22.675 ************************************ 00:16:22.675 END TEST nvmf_srq_overwhelm 00:16:22.675 ************************************ 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:22.675 11:39:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:22.675 ************************************ 00:16:22.675 START TEST nvmf_shutdown 00:16:22.675 ************************************ 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:16:22.935 * Looking for test storage... 00:16:22.935 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@344 -- # case "$op" in 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:22.935 11:39:26 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:22.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.935 --rc genhtml_branch_coverage=1 00:16:22.935 --rc genhtml_function_coverage=1 00:16:22.935 --rc genhtml_legend=1 00:16:22.935 --rc geninfo_all_blocks=1 00:16:22.935 --rc geninfo_unexecuted_blocks=1 00:16:22.935 00:16:22.935 ' 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:22.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.935 --rc genhtml_branch_coverage=1 00:16:22.935 --rc genhtml_function_coverage=1 00:16:22.935 --rc genhtml_legend=1 00:16:22.935 --rc geninfo_all_blocks=1 00:16:22.935 --rc geninfo_unexecuted_blocks=1 00:16:22.935 00:16:22.935 ' 00:16:22.935 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:22.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.935 --rc genhtml_branch_coverage=1 00:16:22.935 --rc genhtml_function_coverage=1 00:16:22.935 --rc genhtml_legend=1 00:16:22.935 --rc geninfo_all_blocks=1 00:16:22.935 --rc geninfo_unexecuted_blocks=1 00:16:22.935 00:16:22.935 ' 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:22.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.936 --rc genhtml_branch_coverage=1 00:16:22.936 --rc genhtml_function_coverage=1 00:16:22.936 --rc genhtml_legend=1 00:16:22.936 --rc geninfo_all_blocks=1 00:16:22.936 --rc geninfo_unexecuted_blocks=1 00:16:22.936 00:16:22.936 ' 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:16:22.936 11:39:26 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:16:22.936 11:39:26 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@50 -- # 
: 0 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:16:22.936 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@54 -- # have_pci_nics=0 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:16:22.936 ************************************ 00:16:22.936 START TEST nvmf_shutdown_tc1 00:16:22.936 ************************************ 00:16:22.936 
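The trace above records a real shell error from common.sh line 31: `'[' '' -eq 1 ']'` fails with "integer expression expected" because an unset variable reaches `test`'s numeric comparison as an empty string. A minimal reproduction and the usual guard, as a sketch; `flag` is a hypothetical stand-in, since the trace does not show which variable expanded empty:

```shell
# Hypothetical flag variable standing in for whatever common.sh line 31 tests;
# the trace only shows that it expands to the empty string.
flag=''

# Unguarded: test(1) cannot parse '' as an integer, prints
# "[: : integer expression expected" on stderr and returns an error status.
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "flag set"
fi

# Guarded: default an empty/unset value to 0 before the numeric comparison,
# so the test evaluates cleanly to false instead of erroring.
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag defaults to 0"
fi
```

The `${flag:-0}` expansion is the standard fix for this class of xtrace noise: the test still takes the false branch, but without the stderr diagnostic.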
11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # prepare_net_devs 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # remove_target_ns 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:22.936 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:23.196 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:23.196 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:16:23.196 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:16:23.196 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # xtrace_disable 00:16:23.196 11:39:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:29.773 
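Earlier in the trace, the harness gates lcov option handling on `lt 1.15 2`, a numeric, field-wise version comparison from scripts/common.sh (`cmp_versions` with `IFS=.-:`). A minimal sketch of that comparison under the same split-on-`.`/`-`/`:` behavior the trace shows; `version_lt` is a hypothetical simplified name, and the real `cmp_versions` supports more operators than strict less-than:

```shell
# Return success (0) when version $1 is strictly less than version $2,
# comparing numeric fields split on '.', '-' and ':' as in the trace.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < max; i++ )); do
        # Missing fields compare as 0, so "1" compares equal to "1.0".
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints "1.15 < 2"
```

Field-wise numeric comparison is what makes `1.15 < 2` true here, where a plain string sort would also have to get `1.9` vs `1.15` right (numeric: 9 < 15, so `1.9 < 1.15`).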
11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@131 -- # pci_devs=() 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@131 -- # local -a pci_devs 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@133 -- # pci_drivers=() 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@135 -- # net_devs=() 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@135 -- # local -ga net_devs 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@136 -- # e810=() 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@136 -- # local -ga e810 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@137 -- # x722=() 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@137 -- # local -ga x722 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@138 -- # mlx=() 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@138 -- # local -ga mlx 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@141 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:16:29.773 
11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:29.773 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:29.773 11:39:32 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:29.773 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.773 11:39:32 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:29.773 Found net devices under 0000:18:00.0: mlx_0_0 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:29.773 Found net devices under 0000:18:00.1: mlx_0_1 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # get_rdma_if_list 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # rdma_devs=() 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- 
# local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@89 -- # continue 2 00:16:29.773 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@89 -- # continue 2 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # is_hw=yes 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@61 -- # uname 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_cm 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_core 00:16:29.774 11:39:32 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_umad 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe iw_cm 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@28 -- # local -g _dev 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@44 -- # ips=() 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 
00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@58 -- # key_initiator=target1 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:29.774 11:39:32 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@11 -- # local val=167772161 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:16:29.774 10.0.0.1 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@11 -- # local val=167772162 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@13 -- 
# printf '%u.%u.%u.%u\n' 10 0 0 2 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:16:29.774 10.0.0.2 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:16:29.774 11:39:32 
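The `val_to_ip` helper traced above unpacks a 32-bit integer into dotted-quad form via `printf '%u.%u.%u.%u\n'`. A self-contained sketch; the bit-shift extraction of the four octets is an assumption inferred from the traced printf arguments:

```shell
#!/usr/bin/env bash
# val_to_ip as traced above: convert a 32-bit integer to a dotted-quad IP.
# The shift/mask octet extraction is assumed, not copied from setup.sh.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(((val >> 24) & 0xff)) $(((val >> 16) & 0xff)) \
    $(((val >> 8) & 0xff)) $((val & 0xff))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```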
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@38 -- # ping_ips 1 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:16:29.774 11:39:32 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:16:29.774 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=target0 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:16:29.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:29.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.030 ms 00:16:29.775 00:16:29.775 --- 10.0.0.2 ping statistics --- 00:16:29.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.775 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local 
dev=target0 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:16:29.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:29.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.022 ms 00:16:29.775 00:16:29.775 --- 10.0.0.2 ping statistics --- 00:16:29.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.775 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # return 0 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:29.775 11:39:32 
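`ping_ip` in the trace takes an optional variable *name* (`NVMF_TARGET_NS_CMD`) for a namespace command prefix and dereferences it with a bash nameref (`local -n`). A hedged sketch of that pattern; the eval body is assumed from the xtrace, and the `echo` prefix below is a hypothetical stand-in so the example runs without RDMA interfaces or namespaces:

```shell
#!/usr/bin/env bash
# ping_ip pattern from the trace: the second argument names an array holding
# a command prefix (e.g. "ip netns exec <ns>"); local -n dereferences it,
# and eval runs the composed command. Body assumed from the xtrace output.
ping_ip() {
  local ip=$1 in_ns=$2 count=${3:-1}
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns
  fi
  eval "${ns[*]} ping -c $count $ip"
}

PREFIX=(echo)              # hypothetical stand-in wrapper, not from the trace
ping_ip 10.0.0.2 PREFIX    # prints the command instead of actually pinging
```

With an empty `in_ns`, the prefix expands to nothing and the ping runs directly, which matches the second traced invocation above.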
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=target0 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:16:29.775 11:39:32 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=target1 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:16:29.775 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=target0 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:29.776 
11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=target1 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:16:29.776 11:39:32 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # nvmfpid=1646137 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # waitforlisten 1646137 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1646137 ']' 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:29.776 [2024-11-20 11:39:32.708879] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:16:29.776 [2024-11-20 11:39:32.708933] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.776 [2024-11-20 11:39:32.784561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:29.776 [2024-11-20 11:39:32.828191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:29.776 [2024-11-20 11:39:32.828233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.776 [2024-11-20 11:39:32.828242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.776 [2024-11-20 11:39:32.828251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.776 [2024-11-20 11:39:32.828257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.776 [2024-11-20 11:39:32.829628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.776 [2024-11-20 11:39:32.829702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:29.776 [2024-11-20 11:39:32.829822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.776 [2024-11-20 11:39:32.829823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 
1024 -u 8192 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.776 11:39:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:29.776 [2024-11-20 11:39:33.010863] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc44520/0xc48a10) succeed. 00:16:29.776 [2024-11-20 11:39:33.020179] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc45bb0/0xc8a0b0) succeed. 00:16:29.776 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.776 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:16:29.776 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:16:29.776 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:29.776 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:29.776 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:29.776 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.776 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.776 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.776 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.776 11:39:33 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.776 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.776 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.776 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.776 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.776 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.776 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.776 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.777 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.777 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.777 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.777 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.777 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.777 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.777 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:29.777 11:39:33 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:29.777 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:16:29.777 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.777 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:29.777 Malloc1 00:16:30.036 [2024-11-20 11:39:33.262745] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:16:30.036 Malloc2 00:16:30.036 Malloc3 00:16:30.036 Malloc4 00:16:30.036 Malloc5 00:16:30.036 Malloc6 00:16:30.036 Malloc7 00:16:30.296 Malloc8 00:16:30.296 Malloc9 00:16:30.296 Malloc10 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1646366 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1646366 /var/tmp/bdevperf.sock 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1646366 ']' 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:30.296 11:39:33 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:30.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # config=() 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # local subsystem config 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:30.296 { 00:16:30.296 "params": { 00:16:30.296 "name": "Nvme$subsystem", 00:16:30.296 "trtype": "$TEST_TRANSPORT", 00:16:30.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.296 "adrfam": "ipv4", 00:16:30.296 "trsvcid": "$NVMF_PORT", 00:16:30.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.296 
"hdgst": ${hdgst:-false}, 00:16:30.296 "ddgst": ${ddgst:-false} 00:16:30.296 }, 00:16:30.296 "method": "bdev_nvme_attach_controller" 00:16:30.296 } 00:16:30.296 EOF 00:16:30.296 )") 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:30.296 { 00:16:30.296 "params": { 00:16:30.296 "name": "Nvme$subsystem", 00:16:30.296 "trtype": "$TEST_TRANSPORT", 00:16:30.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.296 "adrfam": "ipv4", 00:16:30.296 "trsvcid": "$NVMF_PORT", 00:16:30.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.296 "hdgst": ${hdgst:-false}, 00:16:30.296 "ddgst": ${ddgst:-false} 00:16:30.296 }, 00:16:30.296 "method": "bdev_nvme_attach_controller" 00:16:30.296 } 00:16:30.296 EOF 00:16:30.296 )") 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:30.296 { 00:16:30.296 "params": { 00:16:30.296 "name": "Nvme$subsystem", 00:16:30.296 "trtype": "$TEST_TRANSPORT", 00:16:30.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.296 "adrfam": "ipv4", 00:16:30.296 "trsvcid": "$NVMF_PORT", 00:16:30.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.296 "hdgst": ${hdgst:-false}, 00:16:30.296 "ddgst": ${ddgst:-false} 00:16:30.296 }, 00:16:30.296 "method": 
"bdev_nvme_attach_controller" 00:16:30.296 } 00:16:30.296 EOF 00:16:30.296 )") 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:30.296 { 00:16:30.296 "params": { 00:16:30.296 "name": "Nvme$subsystem", 00:16:30.296 "trtype": "$TEST_TRANSPORT", 00:16:30.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.296 "adrfam": "ipv4", 00:16:30.296 "trsvcid": "$NVMF_PORT", 00:16:30.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.296 "hdgst": ${hdgst:-false}, 00:16:30.296 "ddgst": ${ddgst:-false} 00:16:30.296 }, 00:16:30.296 "method": "bdev_nvme_attach_controller" 00:16:30.296 } 00:16:30.296 EOF 00:16:30.296 )") 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:30.296 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:30.296 { 00:16:30.296 "params": { 00:16:30.296 "name": "Nvme$subsystem", 00:16:30.296 "trtype": "$TEST_TRANSPORT", 00:16:30.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.296 "adrfam": "ipv4", 00:16:30.296 "trsvcid": "$NVMF_PORT", 00:16:30.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.296 "hdgst": ${hdgst:-false}, 00:16:30.296 "ddgst": ${ddgst:-false} 00:16:30.296 }, 00:16:30.296 "method": "bdev_nvme_attach_controller" 00:16:30.296 } 00:16:30.296 EOF 00:16:30.296 )") 00:16:30.296 11:39:33 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:30.297 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:30.297 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:30.297 { 00:16:30.297 "params": { 00:16:30.297 "name": "Nvme$subsystem", 00:16:30.297 "trtype": "$TEST_TRANSPORT", 00:16:30.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.297 "adrfam": "ipv4", 00:16:30.297 "trsvcid": "$NVMF_PORT", 00:16:30.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.297 "hdgst": ${hdgst:-false}, 00:16:30.297 "ddgst": ${ddgst:-false} 00:16:30.297 }, 00:16:30.297 "method": "bdev_nvme_attach_controller" 00:16:30.297 } 00:16:30.297 EOF 00:16:30.297 )") 00:16:30.297 [2024-11-20 11:39:33.754389] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:16:30.297 [2024-11-20 11:39:33.754453] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:30.297 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:30.297 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:30.297 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:30.297 { 00:16:30.297 "params": { 00:16:30.297 "name": "Nvme$subsystem", 00:16:30.297 "trtype": "$TEST_TRANSPORT", 00:16:30.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.297 "adrfam": "ipv4", 00:16:30.297 "trsvcid": "$NVMF_PORT", 00:16:30.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.297 "hdgst": ${hdgst:-false}, 00:16:30.297 "ddgst": ${ddgst:-false} 00:16:30.297 }, 00:16:30.297 "method": "bdev_nvme_attach_controller" 00:16:30.297 } 00:16:30.297 EOF 00:16:30.297 )") 00:16:30.297 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:30.297 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:30.297 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:30.297 { 00:16:30.297 "params": { 00:16:30.297 "name": "Nvme$subsystem", 00:16:30.297 "trtype": "$TEST_TRANSPORT", 00:16:30.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.297 "adrfam": "ipv4", 00:16:30.297 "trsvcid": "$NVMF_PORT", 00:16:30.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.297 "hdgst": ${hdgst:-false}, 
00:16:30.297 "ddgst": ${ddgst:-false} 00:16:30.297 }, 00:16:30.297 "method": "bdev_nvme_attach_controller" 00:16:30.297 } 00:16:30.297 EOF 00:16:30.297 )") 00:16:30.556 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:30.556 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:30.556 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:30.556 { 00:16:30.556 "params": { 00:16:30.556 "name": "Nvme$subsystem", 00:16:30.556 "trtype": "$TEST_TRANSPORT", 00:16:30.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.556 "adrfam": "ipv4", 00:16:30.556 "trsvcid": "$NVMF_PORT", 00:16:30.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.556 "hdgst": ${hdgst:-false}, 00:16:30.556 "ddgst": ${ddgst:-false} 00:16:30.556 }, 00:16:30.556 "method": "bdev_nvme_attach_controller" 00:16:30.556 } 00:16:30.556 EOF 00:16:30.556 )") 00:16:30.556 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:30.556 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:30.556 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:30.556 { 00:16:30.556 "params": { 00:16:30.556 "name": "Nvme$subsystem", 00:16:30.556 "trtype": "$TEST_TRANSPORT", 00:16:30.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.556 "adrfam": "ipv4", 00:16:30.556 "trsvcid": "$NVMF_PORT", 00:16:30.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.557 "hdgst": ${hdgst:-false}, 00:16:30.557 "ddgst": ${ddgst:-false} 00:16:30.557 }, 00:16:30.557 "method": "bdev_nvme_attach_controller" 00:16:30.557 } 
00:16:30.557 EOF 00:16:30.557 )") 00:16:30.557 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:30.557 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # jq . 00:16:30.557 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@397 -- # IFS=, 00:16:30.557 11:39:33 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:16:30.557 "params": { 00:16:30.557 "name": "Nvme1", 00:16:30.557 "trtype": "rdma", 00:16:30.557 "traddr": "10.0.0.2", 00:16:30.557 "adrfam": "ipv4", 00:16:30.557 "trsvcid": "4420", 00:16:30.557 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:30.557 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:30.557 "hdgst": false, 00:16:30.557 "ddgst": false 00:16:30.557 }, 00:16:30.557 "method": "bdev_nvme_attach_controller" 00:16:30.557 },{ 00:16:30.557 "params": { 00:16:30.557 "name": "Nvme2", 00:16:30.557 "trtype": "rdma", 00:16:30.557 "traddr": "10.0.0.2", 00:16:30.557 "adrfam": "ipv4", 00:16:30.557 "trsvcid": "4420", 00:16:30.557 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:30.557 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:30.557 "hdgst": false, 00:16:30.557 "ddgst": false 00:16:30.557 }, 00:16:30.557 "method": "bdev_nvme_attach_controller" 00:16:30.557 },{ 00:16:30.557 "params": { 00:16:30.557 "name": "Nvme3", 00:16:30.557 "trtype": "rdma", 00:16:30.557 "traddr": "10.0.0.2", 00:16:30.557 "adrfam": "ipv4", 00:16:30.557 "trsvcid": "4420", 00:16:30.557 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:30.557 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:30.557 "hdgst": false, 00:16:30.557 "ddgst": false 00:16:30.557 }, 00:16:30.557 "method": "bdev_nvme_attach_controller" 00:16:30.557 },{ 00:16:30.557 "params": { 00:16:30.557 "name": "Nvme4", 00:16:30.557 "trtype": "rdma", 00:16:30.557 "traddr": "10.0.0.2", 00:16:30.557 "adrfam": "ipv4", 00:16:30.557 "trsvcid": "4420", 00:16:30.557 
"subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:30.557 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:30.557 "hdgst": false, 00:16:30.557 "ddgst": false 00:16:30.557 }, 00:16:30.557 "method": "bdev_nvme_attach_controller" 00:16:30.557 },{ 00:16:30.557 "params": { 00:16:30.557 "name": "Nvme5", 00:16:30.557 "trtype": "rdma", 00:16:30.557 "traddr": "10.0.0.2", 00:16:30.557 "adrfam": "ipv4", 00:16:30.557 "trsvcid": "4420", 00:16:30.557 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:30.557 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:30.557 "hdgst": false, 00:16:30.557 "ddgst": false 00:16:30.557 }, 00:16:30.557 "method": "bdev_nvme_attach_controller" 00:16:30.557 },{ 00:16:30.557 "params": { 00:16:30.557 "name": "Nvme6", 00:16:30.557 "trtype": "rdma", 00:16:30.557 "traddr": "10.0.0.2", 00:16:30.557 "adrfam": "ipv4", 00:16:30.557 "trsvcid": "4420", 00:16:30.557 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:30.557 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:30.557 "hdgst": false, 00:16:30.557 "ddgst": false 00:16:30.557 }, 00:16:30.557 "method": "bdev_nvme_attach_controller" 00:16:30.557 },{ 00:16:30.557 "params": { 00:16:30.557 "name": "Nvme7", 00:16:30.557 "trtype": "rdma", 00:16:30.557 "traddr": "10.0.0.2", 00:16:30.557 "adrfam": "ipv4", 00:16:30.557 "trsvcid": "4420", 00:16:30.557 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:30.557 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:30.557 "hdgst": false, 00:16:30.557 "ddgst": false 00:16:30.557 }, 00:16:30.557 "method": "bdev_nvme_attach_controller" 00:16:30.557 },{ 00:16:30.557 "params": { 00:16:30.557 "name": "Nvme8", 00:16:30.557 "trtype": "rdma", 00:16:30.557 "traddr": "10.0.0.2", 00:16:30.557 "adrfam": "ipv4", 00:16:30.557 "trsvcid": "4420", 00:16:30.557 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:30.557 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:30.557 "hdgst": false, 00:16:30.557 "ddgst": false 00:16:30.557 }, 00:16:30.557 "method": "bdev_nvme_attach_controller" 00:16:30.557 },{ 00:16:30.557 "params": { 
00:16:30.557 "name": "Nvme9", 00:16:30.557 "trtype": "rdma", 00:16:30.557 "traddr": "10.0.0.2", 00:16:30.557 "adrfam": "ipv4", 00:16:30.557 "trsvcid": "4420", 00:16:30.557 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:30.557 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:30.557 "hdgst": false, 00:16:30.557 "ddgst": false 00:16:30.557 }, 00:16:30.557 "method": "bdev_nvme_attach_controller" 00:16:30.557 },{ 00:16:30.557 "params": { 00:16:30.557 "name": "Nvme10", 00:16:30.557 "trtype": "rdma", 00:16:30.557 "traddr": "10.0.0.2", 00:16:30.557 "adrfam": "ipv4", 00:16:30.557 "trsvcid": "4420", 00:16:30.557 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:30.557 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:30.557 "hdgst": false, 00:16:30.557 "ddgst": false 00:16:30.557 }, 00:16:30.557 "method": "bdev_nvme_attach_controller" 00:16:30.557 }' 00:16:30.557 [2024-11-20 11:39:33.836698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.557 [2024-11-20 11:39:33.881921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.494 11:39:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.494 11:39:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:16:31.494 11:39:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:31.494 11:39:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.494 11:39:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:31.494 11:39:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.494 11:39:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 
1646366 00:16:31.494 11:39:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:16:31.494 11:39:34 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:16:32.431 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1646366 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:16:32.431 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1646137 00:16:32.431 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:32.431 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:32.431 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # config=() 00:16:32.431 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # local subsystem config 00:16:32.431 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:32.431 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:32.431 { 00:16:32.431 "params": { 00:16:32.431 "name": "Nvme$subsystem", 00:16:32.431 "trtype": "$TEST_TRANSPORT", 00:16:32.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.431 "adrfam": "ipv4", 00:16:32.431 "trsvcid": "$NVMF_PORT", 00:16:32.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.431 "hdgst": ${hdgst:-false}, 00:16:32.431 "ddgst": ${ddgst:-false} 00:16:32.431 }, 
00:16:32.431 "method": "bdev_nvme_attach_controller" 00:16:32.431 } 00:16:32.431 EOF 00:16:32.431 )") 00:16:32.431 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:32.431 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:32.431 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:32.431 { 00:16:32.431 "params": { 00:16:32.431 "name": "Nvme$subsystem", 00:16:32.431 "trtype": "$TEST_TRANSPORT", 00:16:32.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.431 "adrfam": "ipv4", 00:16:32.431 "trsvcid": "$NVMF_PORT", 00:16:32.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.431 "hdgst": ${hdgst:-false}, 00:16:32.431 "ddgst": ${ddgst:-false} 00:16:32.431 }, 00:16:32.431 "method": "bdev_nvme_attach_controller" 00:16:32.431 } 00:16:32.431 EOF 00:16:32.431 )") 00:16:32.431 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:32.431 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:32.431 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:32.431 { 00:16:32.431 "params": { 00:16:32.431 "name": "Nvme$subsystem", 00:16:32.431 "trtype": "$TEST_TRANSPORT", 00:16:32.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.431 "adrfam": "ipv4", 00:16:32.431 "trsvcid": "$NVMF_PORT", 00:16:32.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.431 "hdgst": ${hdgst:-false}, 00:16:32.431 "ddgst": ${ddgst:-false} 00:16:32.431 }, 00:16:32.431 "method": "bdev_nvme_attach_controller" 00:16:32.431 } 00:16:32.431 EOF 00:16:32.431 )") 00:16:32.431 11:39:35 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:32.431 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:32.431 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:32.431 { 00:16:32.431 "params": { 00:16:32.431 "name": "Nvme$subsystem", 00:16:32.431 "trtype": "$TEST_TRANSPORT", 00:16:32.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.431 "adrfam": "ipv4", 00:16:32.431 "trsvcid": "$NVMF_PORT", 00:16:32.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.431 "hdgst": ${hdgst:-false}, 00:16:32.431 "ddgst": ${ddgst:-false} 00:16:32.431 }, 00:16:32.431 "method": "bdev_nvme_attach_controller" 00:16:32.431 } 00:16:32.431 EOF 00:16:32.431 )") 00:16:32.431 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:32.431 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:32.431 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:32.431 { 00:16:32.431 "params": { 00:16:32.431 "name": "Nvme$subsystem", 00:16:32.431 "trtype": "$TEST_TRANSPORT", 00:16:32.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.432 "adrfam": "ipv4", 00:16:32.432 "trsvcid": "$NVMF_PORT", 00:16:32.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.432 "hdgst": ${hdgst:-false}, 00:16:32.432 "ddgst": ${ddgst:-false} 00:16:32.432 }, 00:16:32.432 "method": "bdev_nvme_attach_controller" 00:16:32.432 } 00:16:32.432 EOF 00:16:32.432 )") 00:16:32.432 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:32.432 11:39:35 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:32.432 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:32.432 { 00:16:32.432 "params": { 00:16:32.432 "name": "Nvme$subsystem", 00:16:32.432 "trtype": "$TEST_TRANSPORT", 00:16:32.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.432 "adrfam": "ipv4", 00:16:32.432 "trsvcid": "$NVMF_PORT", 00:16:32.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.432 "hdgst": ${hdgst:-false}, 00:16:32.432 "ddgst": ${ddgst:-false} 00:16:32.432 }, 00:16:32.432 "method": "bdev_nvme_attach_controller" 00:16:32.432 } 00:16:32.432 EOF 00:16:32.432 )") 00:16:32.432 [2024-11-20 11:39:35.795260] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:16:32.432 [2024-11-20 11:39:35.795325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646587 ] 00:16:32.432 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:32.432 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:32.432 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:32.432 { 00:16:32.432 "params": { 00:16:32.432 "name": "Nvme$subsystem", 00:16:32.432 "trtype": "$TEST_TRANSPORT", 00:16:32.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.432 "adrfam": "ipv4", 00:16:32.432 "trsvcid": "$NVMF_PORT", 00:16:32.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.432 "hdgst": 
${hdgst:-false}, 00:16:32.432 "ddgst": ${ddgst:-false} 00:16:32.432 }, 00:16:32.432 "method": "bdev_nvme_attach_controller" 00:16:32.432 } 00:16:32.432 EOF 00:16:32.432 )") 00:16:32.432 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:32.432 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:32.432 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:32.432 { 00:16:32.432 "params": { 00:16:32.432 "name": "Nvme$subsystem", 00:16:32.432 "trtype": "$TEST_TRANSPORT", 00:16:32.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.432 "adrfam": "ipv4", 00:16:32.432 "trsvcid": "$NVMF_PORT", 00:16:32.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.432 "hdgst": ${hdgst:-false}, 00:16:32.432 "ddgst": ${ddgst:-false} 00:16:32.432 }, 00:16:32.432 "method": "bdev_nvme_attach_controller" 00:16:32.432 } 00:16:32.432 EOF 00:16:32.432 )") 00:16:32.432 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:32.432 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:32.432 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:32.432 { 00:16:32.432 "params": { 00:16:32.432 "name": "Nvme$subsystem", 00:16:32.432 "trtype": "$TEST_TRANSPORT", 00:16:32.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.432 "adrfam": "ipv4", 00:16:32.432 "trsvcid": "$NVMF_PORT", 00:16:32.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.432 "hdgst": ${hdgst:-false}, 00:16:32.432 "ddgst": ${ddgst:-false} 00:16:32.432 }, 00:16:32.432 "method": "bdev_nvme_attach_controller" 
00:16:32.432 } 00:16:32.432 EOF 00:16:32.432 )") 00:16:32.432 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:32.432 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:32.432 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:32.432 { 00:16:32.432 "params": { 00:16:32.432 "name": "Nvme$subsystem", 00:16:32.432 "trtype": "$TEST_TRANSPORT", 00:16:32.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.432 "adrfam": "ipv4", 00:16:32.432 "trsvcid": "$NVMF_PORT", 00:16:32.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.432 "hdgst": ${hdgst:-false}, 00:16:32.432 "ddgst": ${ddgst:-false} 00:16:32.432 }, 00:16:32.432 "method": "bdev_nvme_attach_controller" 00:16:32.432 } 00:16:32.432 EOF 00:16:32.432 )") 00:16:32.432 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:16:32.432 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # jq . 
00:16:32.432 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@397 -- # IFS=, 00:16:32.432 11:39:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:16:32.432 "params": { 00:16:32.432 "name": "Nvme1", 00:16:32.432 "trtype": "rdma", 00:16:32.432 "traddr": "10.0.0.2", 00:16:32.432 "adrfam": "ipv4", 00:16:32.432 "trsvcid": "4420", 00:16:32.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:32.432 "hdgst": false, 00:16:32.432 "ddgst": false 00:16:32.432 }, 00:16:32.432 "method": "bdev_nvme_attach_controller" 00:16:32.432 },{ 00:16:32.432 "params": { 00:16:32.432 "name": "Nvme2", 00:16:32.432 "trtype": "rdma", 00:16:32.432 "traddr": "10.0.0.2", 00:16:32.432 "adrfam": "ipv4", 00:16:32.432 "trsvcid": "4420", 00:16:32.432 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:32.432 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:32.432 "hdgst": false, 00:16:32.432 "ddgst": false 00:16:32.432 }, 00:16:32.432 "method": "bdev_nvme_attach_controller" 00:16:32.432 },{ 00:16:32.432 "params": { 00:16:32.432 "name": "Nvme3", 00:16:32.432 "trtype": "rdma", 00:16:32.432 "traddr": "10.0.0.2", 00:16:32.432 "adrfam": "ipv4", 00:16:32.432 "trsvcid": "4420", 00:16:32.432 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:32.432 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:32.432 "hdgst": false, 00:16:32.432 "ddgst": false 00:16:32.432 }, 00:16:32.432 "method": "bdev_nvme_attach_controller" 00:16:32.432 },{ 00:16:32.432 "params": { 00:16:32.432 "name": "Nvme4", 00:16:32.432 "trtype": "rdma", 00:16:32.432 "traddr": "10.0.0.2", 00:16:32.432 "adrfam": "ipv4", 00:16:32.432 "trsvcid": "4420", 00:16:32.432 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:32.432 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:32.432 "hdgst": false, 00:16:32.432 "ddgst": false 00:16:32.432 }, 00:16:32.432 "method": "bdev_nvme_attach_controller" 00:16:32.432 },{ 00:16:32.432 "params": { 
00:16:32.432 "name": "Nvme5", 00:16:32.432 "trtype": "rdma", 00:16:32.432 "traddr": "10.0.0.2", 00:16:32.432 "adrfam": "ipv4", 00:16:32.432 "trsvcid": "4420", 00:16:32.432 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:32.432 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:32.432 "hdgst": false, 00:16:32.432 "ddgst": false 00:16:32.432 }, 00:16:32.432 "method": "bdev_nvme_attach_controller" 00:16:32.432 },{ 00:16:32.432 "params": { 00:16:32.432 "name": "Nvme6", 00:16:32.432 "trtype": "rdma", 00:16:32.432 "traddr": "10.0.0.2", 00:16:32.432 "adrfam": "ipv4", 00:16:32.432 "trsvcid": "4420", 00:16:32.432 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:32.432 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:32.432 "hdgst": false, 00:16:32.432 "ddgst": false 00:16:32.432 }, 00:16:32.432 "method": "bdev_nvme_attach_controller" 00:16:32.432 },{ 00:16:32.432 "params": { 00:16:32.432 "name": "Nvme7", 00:16:32.432 "trtype": "rdma", 00:16:32.432 "traddr": "10.0.0.2", 00:16:32.432 "adrfam": "ipv4", 00:16:32.432 "trsvcid": "4420", 00:16:32.432 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:32.432 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:32.432 "hdgst": false, 00:16:32.432 "ddgst": false 00:16:32.432 }, 00:16:32.432 "method": "bdev_nvme_attach_controller" 00:16:32.432 },{ 00:16:32.432 "params": { 00:16:32.432 "name": "Nvme8", 00:16:32.432 "trtype": "rdma", 00:16:32.432 "traddr": "10.0.0.2", 00:16:32.432 "adrfam": "ipv4", 00:16:32.432 "trsvcid": "4420", 00:16:32.432 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:32.432 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:32.432 "hdgst": false, 00:16:32.432 "ddgst": false 00:16:32.432 }, 00:16:32.433 "method": "bdev_nvme_attach_controller" 00:16:32.433 },{ 00:16:32.433 "params": { 00:16:32.433 "name": "Nvme9", 00:16:32.433 "trtype": "rdma", 00:16:32.433 "traddr": "10.0.0.2", 00:16:32.433 "adrfam": "ipv4", 00:16:32.433 "trsvcid": "4420", 00:16:32.433 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:32.433 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:16:32.433 "hdgst": false, 00:16:32.433 "ddgst": false 00:16:32.433 }, 00:16:32.433 "method": "bdev_nvme_attach_controller" 00:16:32.433 },{ 00:16:32.433 "params": { 00:16:32.433 "name": "Nvme10", 00:16:32.433 "trtype": "rdma", 00:16:32.433 "traddr": "10.0.0.2", 00:16:32.433 "adrfam": "ipv4", 00:16:32.433 "trsvcid": "4420", 00:16:32.433 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:32.433 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:32.433 "hdgst": false, 00:16:32.433 "ddgst": false 00:16:32.433 }, 00:16:32.433 "method": "bdev_nvme_attach_controller" 00:16:32.433 }' 00:16:32.433 [2024-11-20 11:39:35.882398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.692 [2024-11-20 11:39:35.928560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.629 Running I/O for 1 seconds... 00:16:34.566 3214.00 IOPS, 200.88 MiB/s 00:16:34.566 Latency(us) 00:16:34.566 [2024-11-20T10:39:38.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.566 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:34.566 Verification LBA range: start 0x0 length 0x400 00:16:34.566 Nvme1n1 : 1.18 343.46 21.47 0.00 0.00 181629.79 23365.01 198773.54 00:16:34.566 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:34.566 Verification LBA range: start 0x0 length 0x400 00:16:34.566 Nvme2n1 : 1.18 347.39 21.71 0.00 0.00 177214.86 5698.78 186920.07 00:16:34.566 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:34.566 Verification LBA range: start 0x0 length 0x400 00:16:34.566 Nvme3n1 : 1.18 361.41 22.59 0.00 0.00 168512.23 6069.20 175978.41 00:16:34.566 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:34.566 Verification LBA range: start 0x0 length 0x400 00:16:34.566 Nvme4n1 : 1.19 398.73 24.92 0.00 0.00 151692.61 7151.97 134035.37 00:16:34.566 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:16:34.566 Verification LBA range: start 0x0 length 0x400 00:16:34.566 Nvme5n1 : 1.19 386.66 24.17 0.00 0.00 154099.17 7921.31 127652.73 00:16:34.566 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:34.566 Verification LBA range: start 0x0 length 0x400 00:16:34.566 Nvme6n1 : 1.19 389.72 24.36 0.00 0.00 150875.70 7750.34 119446.48 00:16:34.566 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:34.566 Verification LBA range: start 0x0 length 0x400 00:16:34.566 Nvme7n1 : 1.19 381.07 23.82 0.00 0.00 152035.86 7522.39 108504.82 00:16:34.566 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:34.566 Verification LBA range: start 0x0 length 0x400 00:16:34.566 Nvme8n1 : 1.19 380.82 23.80 0.00 0.00 150000.87 7294.44 98930.87 00:16:34.566 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:34.566 Verification LBA range: start 0x0 length 0x400 00:16:34.566 Nvme9n1 : 1.19 377.79 23.61 0.00 0.00 149869.97 10542.75 113519.75 00:16:34.566 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:34.566 Verification LBA range: start 0x0 length 0x400 00:16:34.566 Nvme10n1 : 1.19 323.35 20.21 0.00 0.00 172409.99 11568.53 204244.37 00:16:34.566 [2024-11-20T10:39:38.046Z] =================================================================================================================== 00:16:34.566 [2024-11-20T10:39:38.046Z] Total : 3690.39 230.65 0.00 0.00 160135.17 5698.78 204244.37 00:16:34.826 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:16:34.826 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:16:34.826 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:16:34.826 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:34.826 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:16:34.826 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:34.826 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@99 -- # sync 00:16:34.826 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:16:34.826 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:16:34.826 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # set +e 00:16:34.826 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:34.826 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:16:34.826 rmmod nvme_rdma 00:16:34.826 rmmod nvme_fabrics 00:16:35.086 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:35.086 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # set -e 00:16:35.086 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # return 0 00:16:35.086 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # '[' -n 1646137 ']' 00:16:35.086 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@337 -- # killprocess 1646137 00:16:35.086 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' 
-z 1646137 ']' 00:16:35.086 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1646137 00:16:35.086 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:16:35.086 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:35.086 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1646137 00:16:35.086 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:35.086 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:35.086 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1646137' 00:16:35.086 killing process with pid 1646137 00:16:35.086 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1646137 00:16:35.086 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1646137 00:16:35.656 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:35.656 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # nvmf_fini 00:16:35.656 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@264 -- # local dev 00:16:35.656 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@267 -- # remove_target_ns 00:16:35.656 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:35.656 11:39:38 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:35.656 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:35.656 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@268 -- # delete_main_bridge 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@130 -- # return 0 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@271 -- # [[ -e 
/sys/class/net/mlx_0_0/address ]] 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@41 -- # _dev=0 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@41 -- # dev_map=() 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@284 -- # iptr 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@542 -- # iptables-save 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@542 -- # iptables-restore 00:16:35.657 00:16:35.657 real 0m12.485s 00:16:35.657 user 0m28.612s 00:16:35.657 sys 0m5.890s 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.657 ************************************ 00:16:35.657 END TEST nvmf_shutdown_tc1 00:16:35.657 ************************************ 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:16:35.657 ************************************ 00:16:35.657 START TEST nvmf_shutdown_tc2 00:16:35.657 ************************************ 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # prepare_net_devs 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # remove_target_ns 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 
00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # xtrace_disable 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@131 -- # pci_devs=() 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@131 -- # local -a pci_devs 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@133 -- # pci_drivers=() 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@135 -- # net_devs=() 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@135 -- # local -ga net_devs 00:16:35.657 11:39:38 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@136 -- # e810=() 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@136 -- # local -ga e810 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@137 -- # x722=() 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@137 -- # local -ga x722 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@138 -- # mlx=() 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@138 -- # local -ga mlx 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:16:35.657 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:35.658 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:35.658 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:16:35.658 11:39:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@214 -- # [[ mlx5 
== e810 ]] 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:35.658 Found net devices under 0000:18:00.0: mlx_0_0 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:35.658 Found net devices under 0000:18:00.1: mlx_0_1 00:16:35.658 
11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # get_rdma_if_list 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # rdma_devs=() 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@89 -- # continue 2 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@89 -- # continue 2 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # is_hw=yes 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@244 -- # local 
total_initiator_target_pairs=1 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@61 -- # uname 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_cm 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_core 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_umad 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe iw_cm 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@28 -- # local -g _dev 00:16:35.658 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:16:35.658 
11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@44 -- # ips=() 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@58 -- # key_initiator=target1 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 
00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@11 -- # local val=167772161 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:16:35.659 10.0.0.1 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@11 -- # local val=167772162 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:16:35.659 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:16:35.918 10.0.0.2 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:16:35.918 11:39:39 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@38 -- # ping_ips 1 00:16:35.918 11:39:39 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:16:35.918 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=target0 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 
-- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:16:35.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:35.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:16:35.919 00:16:35.919 --- 10.0.0.2 ping statistics --- 00:16:35.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.919 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=target0 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:16:35.919 
11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:16:35.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:35.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.023 ms 00:16:35.919 00:16:35.919 --- 10.0.0.2 ping statistics --- 00:16:35.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.919 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # return 0 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:35.919 11:39:39 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=target0 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:16:35.919 11:39:39 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=target1 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:16:35.919 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=target0 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:35.920 
11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=target1 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:16:35.920 11:39:39 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # nvmfpid=1647235 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # waitforlisten 1647235 00:16:35.920 11:39:39 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1647235 ']' 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.920 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:35.920 [2024-11-20 11:39:39.369228] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:16:35.920 [2024-11-20 11:39:39.369294] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.185 [2024-11-20 11:39:39.446217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:36.186 [2024-11-20 11:39:39.496259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.186 [2024-11-20 11:39:39.496299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:36.186 [2024-11-20 11:39:39.496309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.186 [2024-11-20 11:39:39.496317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.186 [2024-11-20 11:39:39.496325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.186 [2024-11-20 11:39:39.497693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.186 [2024-11-20 11:39:39.497768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:36.186 [2024-11-20 11:39:39.497884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.186 [2024-11-20 11:39:39.497886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:36.186 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.186 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:16:36.186 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:36.186 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:36.186 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:36.186 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.186 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:36.186 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:36.186 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:36.515 [2024-11-20 11:39:39.676829] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x162c520/0x1630a10) succeed. 00:16:36.515 [2024-11-20 11:39:39.685978] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x162dbb0/0x16720b0) succeed. 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:36.515 11:39:39 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.515 11:39:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:36.515 Malloc1 00:16:36.515 [2024-11-20 11:39:39.939278] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:16:36.515 Malloc2 00:16:36.773 Malloc3 00:16:36.773 Malloc4 00:16:36.773 Malloc5 00:16:36.773 Malloc6 00:16:36.773 Malloc7 00:16:36.773 Malloc8 00:16:37.032 Malloc9 00:16:37.032 Malloc10 00:16:37.032 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.032 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:16:37.032 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:37.032 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:37.032 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1647466 00:16:37.032 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1647466 /var/tmp/bdevperf.sock 00:16:37.032 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1647466 ']' 00:16:37.032 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:37.032 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.032 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:37.032 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:37.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # config=() 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # local subsystem config 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:37.033 { 00:16:37.033 "params": { 00:16:37.033 "name": "Nvme$subsystem", 00:16:37.033 "trtype": "$TEST_TRANSPORT", 00:16:37.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:37.033 "adrfam": "ipv4", 00:16:37.033 "trsvcid": "$NVMF_PORT", 00:16:37.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:37.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:37.033 "hdgst": ${hdgst:-false}, 00:16:37.033 "ddgst": ${ddgst:-false} 00:16:37.033 }, 00:16:37.033 "method": "bdev_nvme_attach_controller" 00:16:37.033 } 00:16:37.033 EOF 00:16:37.033 )") 00:16:37.033 
11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:37.033 { 00:16:37.033 "params": { 00:16:37.033 "name": "Nvme$subsystem", 00:16:37.033 "trtype": "$TEST_TRANSPORT", 00:16:37.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:37.033 "adrfam": "ipv4", 00:16:37.033 "trsvcid": "$NVMF_PORT", 00:16:37.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:37.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:37.033 "hdgst": ${hdgst:-false}, 00:16:37.033 "ddgst": ${ddgst:-false} 00:16:37.033 }, 00:16:37.033 "method": "bdev_nvme_attach_controller" 00:16:37.033 } 00:16:37.033 EOF 00:16:37.033 )") 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:37.033 { 00:16:37.033 "params": { 00:16:37.033 "name": "Nvme$subsystem", 00:16:37.033 "trtype": "$TEST_TRANSPORT", 00:16:37.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:37.033 "adrfam": "ipv4", 00:16:37.033 "trsvcid": "$NVMF_PORT", 00:16:37.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:37.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:37.033 "hdgst": ${hdgst:-false}, 00:16:37.033 "ddgst": ${ddgst:-false} 00:16:37.033 }, 00:16:37.033 "method": "bdev_nvme_attach_controller" 00:16:37.033 } 00:16:37.033 EOF 00:16:37.033 )") 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:16:37.033 11:39:40 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:37.033 { 00:16:37.033 "params": { 00:16:37.033 "name": "Nvme$subsystem", 00:16:37.033 "trtype": "$TEST_TRANSPORT", 00:16:37.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:37.033 "adrfam": "ipv4", 00:16:37.033 "trsvcid": "$NVMF_PORT", 00:16:37.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:37.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:37.033 "hdgst": ${hdgst:-false}, 00:16:37.033 "ddgst": ${ddgst:-false} 00:16:37.033 }, 00:16:37.033 "method": "bdev_nvme_attach_controller" 00:16:37.033 } 00:16:37.033 EOF 00:16:37.033 )") 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:37.033 { 00:16:37.033 "params": { 00:16:37.033 "name": "Nvme$subsystem", 00:16:37.033 "trtype": "$TEST_TRANSPORT", 00:16:37.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:37.033 "adrfam": "ipv4", 00:16:37.033 "trsvcid": "$NVMF_PORT", 00:16:37.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:37.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:37.033 "hdgst": ${hdgst:-false}, 00:16:37.033 "ddgst": ${ddgst:-false} 00:16:37.033 }, 00:16:37.033 "method": "bdev_nvme_attach_controller" 00:16:37.033 } 00:16:37.033 EOF 00:16:37.033 )") 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 
00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:37.033 { 00:16:37.033 "params": { 00:16:37.033 "name": "Nvme$subsystem", 00:16:37.033 "trtype": "$TEST_TRANSPORT", 00:16:37.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:37.033 "adrfam": "ipv4", 00:16:37.033 "trsvcid": "$NVMF_PORT", 00:16:37.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:37.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:37.033 "hdgst": ${hdgst:-false}, 00:16:37.033 "ddgst": ${ddgst:-false} 00:16:37.033 }, 00:16:37.033 "method": "bdev_nvme_attach_controller" 00:16:37.033 } 00:16:37.033 EOF 00:16:37.033 )") 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:16:37.033 [2024-11-20 11:39:40.448838] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:16:37.033 [2024-11-20 11:39:40.448901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647466 ] 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:37.033 { 00:16:37.033 "params": { 00:16:37.033 "name": "Nvme$subsystem", 00:16:37.033 "trtype": "$TEST_TRANSPORT", 00:16:37.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:37.033 "adrfam": "ipv4", 00:16:37.033 "trsvcid": "$NVMF_PORT", 00:16:37.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:37.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:37.033 "hdgst": ${hdgst:-false}, 00:16:37.033 "ddgst": ${ddgst:-false} 00:16:37.033 }, 00:16:37.033 "method": "bdev_nvme_attach_controller" 
00:16:37.033 } 00:16:37.033 EOF 00:16:37.033 )") 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:37.033 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:37.033 { 00:16:37.033 "params": { 00:16:37.033 "name": "Nvme$subsystem", 00:16:37.033 "trtype": "$TEST_TRANSPORT", 00:16:37.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:37.033 "adrfam": "ipv4", 00:16:37.034 "trsvcid": "$NVMF_PORT", 00:16:37.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:37.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:37.034 "hdgst": ${hdgst:-false}, 00:16:37.034 "ddgst": ${ddgst:-false} 00:16:37.034 }, 00:16:37.034 "method": "bdev_nvme_attach_controller" 00:16:37.034 } 00:16:37.034 EOF 00:16:37.034 )") 00:16:37.034 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:16:37.034 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:37.034 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:37.034 { 00:16:37.034 "params": { 00:16:37.034 "name": "Nvme$subsystem", 00:16:37.034 "trtype": "$TEST_TRANSPORT", 00:16:37.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:37.034 "adrfam": "ipv4", 00:16:37.034 "trsvcid": "$NVMF_PORT", 00:16:37.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:37.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:37.034 "hdgst": ${hdgst:-false}, 00:16:37.034 "ddgst": ${ddgst:-false} 00:16:37.034 }, 00:16:37.034 "method": "bdev_nvme_attach_controller" 00:16:37.034 } 00:16:37.034 EOF 00:16:37.034 )") 00:16:37.034 11:39:40 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:16:37.034 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:37.034 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:37.034 { 00:16:37.034 "params": { 00:16:37.034 "name": "Nvme$subsystem", 00:16:37.034 "trtype": "$TEST_TRANSPORT", 00:16:37.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:37.034 "adrfam": "ipv4", 00:16:37.034 "trsvcid": "$NVMF_PORT", 00:16:37.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:37.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:37.034 "hdgst": ${hdgst:-false}, 00:16:37.034 "ddgst": ${ddgst:-false} 00:16:37.034 }, 00:16:37.034 "method": "bdev_nvme_attach_controller" 00:16:37.034 } 00:16:37.034 EOF 00:16:37.034 )") 00:16:37.034 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:16:37.034 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # jq . 
00:16:37.034 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@397 -- # IFS=, 00:16:37.034 11:39:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:16:37.034 "params": { 00:16:37.034 "name": "Nvme1", 00:16:37.034 "trtype": "rdma", 00:16:37.034 "traddr": "10.0.0.2", 00:16:37.034 "adrfam": "ipv4", 00:16:37.034 "trsvcid": "4420", 00:16:37.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:37.034 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:37.034 "hdgst": false, 00:16:37.034 "ddgst": false 00:16:37.034 }, 00:16:37.034 "method": "bdev_nvme_attach_controller" 00:16:37.034 },{ 00:16:37.034 "params": { 00:16:37.034 "name": "Nvme2", 00:16:37.034 "trtype": "rdma", 00:16:37.034 "traddr": "10.0.0.2", 00:16:37.034 "adrfam": "ipv4", 00:16:37.034 "trsvcid": "4420", 00:16:37.034 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:37.034 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:37.034 "hdgst": false, 00:16:37.034 "ddgst": false 00:16:37.034 }, 00:16:37.034 "method": "bdev_nvme_attach_controller" 00:16:37.034 },{ 00:16:37.034 "params": { 00:16:37.034 "name": "Nvme3", 00:16:37.034 "trtype": "rdma", 00:16:37.034 "traddr": "10.0.0.2", 00:16:37.034 "adrfam": "ipv4", 00:16:37.034 "trsvcid": "4420", 00:16:37.034 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:37.034 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:37.034 "hdgst": false, 00:16:37.034 "ddgst": false 00:16:37.034 }, 00:16:37.034 "method": "bdev_nvme_attach_controller" 00:16:37.034 },{ 00:16:37.034 "params": { 00:16:37.034 "name": "Nvme4", 00:16:37.034 "trtype": "rdma", 00:16:37.034 "traddr": "10.0.0.2", 00:16:37.034 "adrfam": "ipv4", 00:16:37.034 "trsvcid": "4420", 00:16:37.034 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:37.034 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:37.034 "hdgst": false, 00:16:37.034 "ddgst": false 00:16:37.034 }, 00:16:37.034 "method": "bdev_nvme_attach_controller" 00:16:37.034 },{ 00:16:37.034 "params": { 
00:16:37.034 "name": "Nvme5", 00:16:37.034 "trtype": "rdma", 00:16:37.034 "traddr": "10.0.0.2", 00:16:37.034 "adrfam": "ipv4", 00:16:37.034 "trsvcid": "4420", 00:16:37.034 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:37.034 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:37.034 "hdgst": false, 00:16:37.034 "ddgst": false 00:16:37.034 }, 00:16:37.034 "method": "bdev_nvme_attach_controller" 00:16:37.034 },{ 00:16:37.034 "params": { 00:16:37.034 "name": "Nvme6", 00:16:37.034 "trtype": "rdma", 00:16:37.034 "traddr": "10.0.0.2", 00:16:37.034 "adrfam": "ipv4", 00:16:37.034 "trsvcid": "4420", 00:16:37.034 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:37.034 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:37.034 "hdgst": false, 00:16:37.034 "ddgst": false 00:16:37.034 }, 00:16:37.034 "method": "bdev_nvme_attach_controller" 00:16:37.034 },{ 00:16:37.034 "params": { 00:16:37.034 "name": "Nvme7", 00:16:37.034 "trtype": "rdma", 00:16:37.034 "traddr": "10.0.0.2", 00:16:37.034 "adrfam": "ipv4", 00:16:37.034 "trsvcid": "4420", 00:16:37.034 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:37.034 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:37.034 "hdgst": false, 00:16:37.034 "ddgst": false 00:16:37.034 }, 00:16:37.034 "method": "bdev_nvme_attach_controller" 00:16:37.034 },{ 00:16:37.034 "params": { 00:16:37.034 "name": "Nvme8", 00:16:37.034 "trtype": "rdma", 00:16:37.034 "traddr": "10.0.0.2", 00:16:37.034 "adrfam": "ipv4", 00:16:37.034 "trsvcid": "4420", 00:16:37.034 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:37.034 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:37.034 "hdgst": false, 00:16:37.034 "ddgst": false 00:16:37.034 }, 00:16:37.034 "method": "bdev_nvme_attach_controller" 00:16:37.034 },{ 00:16:37.034 "params": { 00:16:37.034 "name": "Nvme9", 00:16:37.034 "trtype": "rdma", 00:16:37.034 "traddr": "10.0.0.2", 00:16:37.034 "adrfam": "ipv4", 00:16:37.034 "trsvcid": "4420", 00:16:37.034 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:37.034 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:16:37.034 "hdgst": false, 00:16:37.034 "ddgst": false 00:16:37.034 }, 00:16:37.034 "method": "bdev_nvme_attach_controller" 00:16:37.034 },{ 00:16:37.034 "params": { 00:16:37.034 "name": "Nvme10", 00:16:37.034 "trtype": "rdma", 00:16:37.034 "traddr": "10.0.0.2", 00:16:37.034 "adrfam": "ipv4", 00:16:37.034 "trsvcid": "4420", 00:16:37.034 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:37.034 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:37.034 "hdgst": false, 00:16:37.034 "ddgst": false 00:16:37.034 }, 00:16:37.034 "method": "bdev_nvme_attach_controller" 00:16:37.034 }' 00:16:37.293 [2024-11-20 11:39:40.531614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.293 [2024-11-20 11:39:40.577430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.229 Running I/O for 10 seconds... 00:16:38.229 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.229 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:16:38.229 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:38.229 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.229 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:38.229 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.229 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:16:38.229 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:38.229 11:39:41 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:16:38.229 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:16:38.229 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:16:38.229 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:16:38.229 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:16:38.229 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:38.229 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:16:38.229 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.229 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:38.488 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.488 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:16:38.488 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:16:38.488 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:16:38.746 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:16:38.747 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:16:38.747 11:39:41 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:38.747 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:16:38.747 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.747 11:39:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:38.747 11:39:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.747 11:39:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=155 00:16:38.747 11:39:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 155 -ge 100 ']' 00:16:38.747 11:39:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:16:38.747 11:39:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:16:38.747 11:39:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:16:38.747 11:39:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1647466 00:16:38.747 11:39:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1647466 ']' 00:16:38.747 11:39:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1647466 00:16:38.747 11:39:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:16:38.747 11:39:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.747 
11:39:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1647466 00:16:38.747 11:39:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.747 11:39:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.747 11:39:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1647466' 00:16:38.747 killing process with pid 1647466 00:16:38.747 11:39:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1647466 00:16:38.747 11:39:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1647466 00:16:39.006 Received shutdown signal, test time was about 0.823397 seconds 00:16:39.006 00:16:39.006 Latency(us) 00:16:39.006 [2024-11-20T10:39:42.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.006 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:39.006 Verification LBA range: start 0x0 length 0x400 00:16:39.006 Nvme1n1 : 0.81 346.58 21.66 0.00 0.00 180696.80 8320.22 206979.78 00:16:39.006 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:39.006 Verification LBA range: start 0x0 length 0x400 00:16:39.006 Nvme2n1 : 0.81 355.87 22.24 0.00 0.00 172589.81 8548.17 197861.73 00:16:39.006 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:39.006 Verification LBA range: start 0x0 length 0x400 00:16:39.006 Nvme3n1 : 0.81 349.17 21.82 0.00 0.00 172206.84 8833.11 189655.49 00:16:39.006 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:39.006 Verification LBA range: start 0x0 length 0x400 00:16:39.006 Nvme4n1 : 0.81 394.18 24.64 0.00 
0.00 149657.82 5527.82 131299.95 00:16:39.006 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:39.006 Verification LBA range: start 0x0 length 0x400 00:16:39.006 Nvme5n1 : 0.81 373.83 23.36 0.00 0.00 154760.75 9744.92 167772.16 00:16:39.006 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:39.006 Verification LBA range: start 0x0 length 0x400 00:16:39.006 Nvme6n1 : 0.81 392.94 24.56 0.00 0.00 143961.13 9972.87 115799.26 00:16:39.006 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:39.006 Verification LBA range: start 0x0 length 0x400 00:16:39.006 Nvme7n1 : 0.82 392.25 24.52 0.00 0.00 141638.17 10485.76 111240.24 00:16:39.006 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:39.006 Verification LBA range: start 0x0 length 0x400 00:16:39.006 Nvme8n1 : 0.82 391.58 24.47 0.00 0.00 138700.71 10941.66 103945.79 00:16:39.006 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:39.006 Verification LBA range: start 0x0 length 0x400 00:16:39.006 Nvme9n1 : 0.82 390.78 24.42 0.00 0.00 136551.74 11796.48 92548.23 00:16:39.006 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:39.006 Verification LBA range: start 0x0 length 0x400 00:16:39.006 Nvme10n1 : 0.82 311.16 19.45 0.00 0.00 167024.14 3105.84 211538.81 00:16:39.006 [2024-11-20T10:39:42.486Z] =================================================================================================================== 00:16:39.006 [2024-11-20T10:39:42.486Z] Total : 3698.33 231.15 0.00 0.00 154833.61 3105.84 211538.81 00:16:39.264 11:39:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1647235 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@117 -- # stoptarget 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@99 -- # sync 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # set +e 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:16:40.199 rmmod nvme_rdma 00:16:40.199 rmmod nvme_fabrics 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # set -e 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # return 0 
00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # '[' -n 1647235 ']' 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@337 -- # killprocess 1647235 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1647235 ']' 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1647235 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:40.199 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1647235 00:16:40.458 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:40.458 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:40.458 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1647235' 00:16:40.458 killing process with pid 1647235 00:16:40.458 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1647235 00:16:40.458 11:39:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1647235 00:16:40.717 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:40.717 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # nvmf_fini 00:16:40.717 11:39:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@264 -- # local dev 00:16:40.717 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@267 -- # remove_target_ns 00:16:40.717 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:40.717 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:40.717 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:40.717 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@268 -- # delete_main_bridge 00:16:40.717 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:16:40.717 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@130 -- # return 0 00:16:40.717 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:16:40.717 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:16:40.717 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:16:40.717 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:16:40.717 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:16:40.717 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:16:40.717 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:16:40.717 
11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:16:40.976 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:16:40.976 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:16:40.976 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:16:40.976 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:16:40.976 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:16:40.976 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:16:40.976 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:16:40.976 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:16:40.976 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:16:40.976 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@41 -- # _dev=0 00:16:40.976 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@41 -- # dev_map=() 00:16:40.976 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@284 -- # iptr 00:16:40.976 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@542 -- # iptables-save 00:16:40.976 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:16:40.976 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@542 -- # iptables-restore 00:16:40.976 00:16:40.976 real 0m5.229s 00:16:40.976 user 0m20.590s 00:16:40.977 sys 0m1.226s 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:40.977 ************************************ 00:16:40.977 END TEST nvmf_shutdown_tc2 00:16:40.977 ************************************ 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:16:40.977 ************************************ 00:16:40.977 START TEST nvmf_shutdown_tc3 00:16:40.977 ************************************ 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # prepare_net_devs 00:16:40.977 11:39:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # remove_target_ns 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # xtrace_disable 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@131 -- # pci_devs=() 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@131 -- # local -a pci_devs 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@133 -- # pci_drivers=() 00:16:40.977 11:39:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@135 -- # net_devs=() 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@135 -- # local -ga net_devs 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@136 -- # e810=() 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@136 -- # local -ga e810 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@137 -- # x722=() 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@137 -- # local -ga x722 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@138 -- # mlx=() 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@138 -- # local -ga mlx 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:40.977 11:39:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:40.977 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:40.977 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@194 -- # [[ 
rdma == rdma ]] 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:40.977 Found net devices under 0000:18:00.0: mlx_0_0 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 -- # (( 1 
== 0 )) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:40.977 Found net devices under 0000:18:00.1: mlx_0_1 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # get_rdma_if_list 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # rdma_devs=() 00:16:40.977 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@89 -- # continue 2 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@89 -- # continue 2 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # is_hw=yes 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 
-- # [[ rdma == tcp ]] 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@61 -- # uname 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_cm 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_core 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_umad 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe iw_cm 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:16:40.978 11:39:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@28 -- # local -g _dev 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@44 -- # ips=() 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@58 -- # key_initiator=target1 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@61 -- # [[ 
phy == phy ]] 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@11 -- # local val=167772161 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee 
/sys/class/net/mlx_0_0/ifalias' 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:16:40.978 10.0.0.1 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@11 -- # local val=167772162 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:16:40.978 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:16:41.239 10.0.0.2 00:16:41.239 
11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:16:41.239 11:39:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@38 -- # ping_ips 1 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=target0 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:16:41.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:41.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.029 ms 00:16:41.239 00:16:41.239 --- 10.0.0.2 ping statistics --- 00:16:41.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.239 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=target0 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:16:41.239 
11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:16:41.239 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:16:41.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:41.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.026 ms 00:16:41.240 00:16:41.240 --- 10.0.0.2 ping statistics --- 00:16:41.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.240 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # return 0 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:41.240 11:39:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=target0 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:16:41.240 11:39:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=target1 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=target0 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:41.240 
11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=target1 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:16:41.240 11:39:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # nvmfpid=1648139 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # waitforlisten 1648139 00:16:41.240 11:39:44 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1648139 ']' 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.240 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.241 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.241 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:41.241 [2024-11-20 11:39:44.668488] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:16:41.241 [2024-11-20 11:39:44.668547] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.500 [2024-11-20 11:39:44.747677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.500 [2024-11-20 11:39:44.794656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.500 [2024-11-20 11:39:44.794699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:41.500 [2024-11-20 11:39:44.794708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.500 [2024-11-20 11:39:44.794717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.500 [2024-11-20 11:39:44.794724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:41.500 [2024-11-20 11:39:44.796166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.500 [2024-11-20 11:39:44.796247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:41.500 [2024-11-20 11:39:44.796350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.500 [2024-11-20 11:39:44.796351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:41.500 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.500 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:16:41.500 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:41.500 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:41.500 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:41.500 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.500 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:41.500 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:41.500 11:39:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:41.759 [2024-11-20 11:39:44.978575] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d83520/0x1d87a10) succeed. 00:16:41.759 [2024-11-20 11:39:44.987998] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d84bb0/0x1dc90b0) succeed. 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.759 11:39:45 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.759 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:41.759 Malloc1 00:16:42.018 [2024-11-20 11:39:45.238953] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:16:42.018 Malloc2 00:16:42.018 Malloc3 00:16:42.018 Malloc4 00:16:42.018 Malloc5 00:16:42.018 Malloc6 00:16:42.277 Malloc7 00:16:42.277 Malloc8 00:16:42.277 Malloc9 00:16:42.277 Malloc10 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1648276 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1648276 /var/tmp/bdevperf.sock 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1648276 ']' 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:42.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # config=() 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # local subsystem config 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:42.277 { 00:16:42.277 "params": { 00:16:42.277 "name": "Nvme$subsystem", 00:16:42.277 "trtype": "$TEST_TRANSPORT", 00:16:42.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.277 "adrfam": "ipv4", 00:16:42.277 "trsvcid": "$NVMF_PORT", 00:16:42.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.277 "hdgst": ${hdgst:-false}, 00:16:42.277 "ddgst": ${ddgst:-false} 00:16:42.277 }, 00:16:42.277 "method": "bdev_nvme_attach_controller" 00:16:42.277 } 00:16:42.277 EOF 00:16:42.277 )") 00:16:42.277 
11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:42.277 { 00:16:42.277 "params": { 00:16:42.277 "name": "Nvme$subsystem", 00:16:42.277 "trtype": "$TEST_TRANSPORT", 00:16:42.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.277 "adrfam": "ipv4", 00:16:42.277 "trsvcid": "$NVMF_PORT", 00:16:42.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.277 "hdgst": ${hdgst:-false}, 00:16:42.277 "ddgst": ${ddgst:-false} 00:16:42.277 }, 00:16:42.277 "method": "bdev_nvme_attach_controller" 00:16:42.277 } 00:16:42.277 EOF 00:16:42.277 )") 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:42.277 { 00:16:42.277 "params": { 00:16:42.277 "name": "Nvme$subsystem", 00:16:42.277 "trtype": "$TEST_TRANSPORT", 00:16:42.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.277 "adrfam": "ipv4", 00:16:42.277 "trsvcid": "$NVMF_PORT", 00:16:42.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.277 "hdgst": ${hdgst:-false}, 00:16:42.277 "ddgst": ${ddgst:-false} 00:16:42.277 }, 00:16:42.277 "method": "bdev_nvme_attach_controller" 00:16:42.277 } 00:16:42.277 EOF 00:16:42.277 )") 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:16:42.277 11:39:45 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:42.277 { 00:16:42.277 "params": { 00:16:42.277 "name": "Nvme$subsystem", 00:16:42.277 "trtype": "$TEST_TRANSPORT", 00:16:42.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.277 "adrfam": "ipv4", 00:16:42.277 "trsvcid": "$NVMF_PORT", 00:16:42.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.277 "hdgst": ${hdgst:-false}, 00:16:42.277 "ddgst": ${ddgst:-false} 00:16:42.277 }, 00:16:42.277 "method": "bdev_nvme_attach_controller" 00:16:42.277 } 00:16:42.277 EOF 00:16:42.277 )") 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:16:42.277 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:42.278 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:42.278 { 00:16:42.278 "params": { 00:16:42.278 "name": "Nvme$subsystem", 00:16:42.278 "trtype": "$TEST_TRANSPORT", 00:16:42.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.278 "adrfam": "ipv4", 00:16:42.278 "trsvcid": "$NVMF_PORT", 00:16:42.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.278 "hdgst": ${hdgst:-false}, 00:16:42.278 "ddgst": ${ddgst:-false} 00:16:42.278 }, 00:16:42.278 "method": "bdev_nvme_attach_controller" 00:16:42.278 } 00:16:42.278 EOF 00:16:42.278 )") 00:16:42.278 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:16:42.278 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 
00:16:42.278 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:42.278 { 00:16:42.278 "params": { 00:16:42.278 "name": "Nvme$subsystem", 00:16:42.278 "trtype": "$TEST_TRANSPORT", 00:16:42.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.278 "adrfam": "ipv4", 00:16:42.278 "trsvcid": "$NVMF_PORT", 00:16:42.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.278 "hdgst": ${hdgst:-false}, 00:16:42.278 "ddgst": ${ddgst:-false} 00:16:42.278 }, 00:16:42.278 "method": "bdev_nvme_attach_controller" 00:16:42.278 } 00:16:42.278 EOF 00:16:42.278 )") 00:16:42.537 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:16:42.537 [2024-11-20 11:39:45.756126] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:16:42.537 [2024-11-20 11:39:45.756193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648276 ] 00:16:42.537 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:42.537 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:42.537 { 00:16:42.537 "params": { 00:16:42.537 "name": "Nvme$subsystem", 00:16:42.537 "trtype": "$TEST_TRANSPORT", 00:16:42.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.537 "adrfam": "ipv4", 00:16:42.537 "trsvcid": "$NVMF_PORT", 00:16:42.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.537 "hdgst": ${hdgst:-false}, 00:16:42.537 "ddgst": ${ddgst:-false} 00:16:42.537 }, 00:16:42.537 "method": "bdev_nvme_attach_controller" 
00:16:42.537 } 00:16:42.537 EOF 00:16:42.537 )") 00:16:42.537 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:16:42.537 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:42.537 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:42.537 { 00:16:42.537 "params": { 00:16:42.537 "name": "Nvme$subsystem", 00:16:42.537 "trtype": "$TEST_TRANSPORT", 00:16:42.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.537 "adrfam": "ipv4", 00:16:42.537 "trsvcid": "$NVMF_PORT", 00:16:42.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.537 "hdgst": ${hdgst:-false}, 00:16:42.537 "ddgst": ${ddgst:-false} 00:16:42.537 }, 00:16:42.537 "method": "bdev_nvme_attach_controller" 00:16:42.537 } 00:16:42.537 EOF 00:16:42.537 )") 00:16:42.537 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:16:42.537 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:42.537 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:42.537 { 00:16:42.537 "params": { 00:16:42.537 "name": "Nvme$subsystem", 00:16:42.537 "trtype": "$TEST_TRANSPORT", 00:16:42.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.537 "adrfam": "ipv4", 00:16:42.537 "trsvcid": "$NVMF_PORT", 00:16:42.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.537 "hdgst": ${hdgst:-false}, 00:16:42.537 "ddgst": ${ddgst:-false} 00:16:42.537 }, 00:16:42.537 "method": "bdev_nvme_attach_controller" 00:16:42.537 } 00:16:42.537 EOF 00:16:42.537 )") 00:16:42.537 11:39:45 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:16:42.537 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:16:42.537 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:16:42.537 { 00:16:42.537 "params": { 00:16:42.537 "name": "Nvme$subsystem", 00:16:42.537 "trtype": "$TEST_TRANSPORT", 00:16:42.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.537 "adrfam": "ipv4", 00:16:42.537 "trsvcid": "$NVMF_PORT", 00:16:42.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.537 "hdgst": ${hdgst:-false}, 00:16:42.537 "ddgst": ${ddgst:-false} 00:16:42.537 }, 00:16:42.537 "method": "bdev_nvme_attach_controller" 00:16:42.537 } 00:16:42.537 EOF 00:16:42.537 )") 00:16:42.537 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:16:42.537 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # jq . 
00:16:42.537 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@397 -- # IFS=, 00:16:42.537 11:39:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:16:42.537 "params": { 00:16:42.537 "name": "Nvme1", 00:16:42.537 "trtype": "rdma", 00:16:42.537 "traddr": "10.0.0.2", 00:16:42.537 "adrfam": "ipv4", 00:16:42.537 "trsvcid": "4420", 00:16:42.537 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.537 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:42.537 "hdgst": false, 00:16:42.537 "ddgst": false 00:16:42.537 }, 00:16:42.537 "method": "bdev_nvme_attach_controller" 00:16:42.537 },{ 00:16:42.537 "params": { 00:16:42.537 "name": "Nvme2", 00:16:42.537 "trtype": "rdma", 00:16:42.537 "traddr": "10.0.0.2", 00:16:42.537 "adrfam": "ipv4", 00:16:42.537 "trsvcid": "4420", 00:16:42.537 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:42.537 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:42.537 "hdgst": false, 00:16:42.537 "ddgst": false 00:16:42.537 }, 00:16:42.537 "method": "bdev_nvme_attach_controller" 00:16:42.537 },{ 00:16:42.537 "params": { 00:16:42.537 "name": "Nvme3", 00:16:42.537 "trtype": "rdma", 00:16:42.537 "traddr": "10.0.0.2", 00:16:42.537 "adrfam": "ipv4", 00:16:42.537 "trsvcid": "4420", 00:16:42.537 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:42.537 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:42.537 "hdgst": false, 00:16:42.537 "ddgst": false 00:16:42.537 }, 00:16:42.537 "method": "bdev_nvme_attach_controller" 00:16:42.537 },{ 00:16:42.537 "params": { 00:16:42.537 "name": "Nvme4", 00:16:42.537 "trtype": "rdma", 00:16:42.537 "traddr": "10.0.0.2", 00:16:42.537 "adrfam": "ipv4", 00:16:42.537 "trsvcid": "4420", 00:16:42.537 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:42.537 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:42.537 "hdgst": false, 00:16:42.537 "ddgst": false 00:16:42.537 }, 00:16:42.537 "method": "bdev_nvme_attach_controller" 00:16:42.537 },{ 00:16:42.537 "params": { 
00:16:42.537 "name": "Nvme5", 00:16:42.537 "trtype": "rdma", 00:16:42.537 "traddr": "10.0.0.2", 00:16:42.537 "adrfam": "ipv4", 00:16:42.537 "trsvcid": "4420", 00:16:42.537 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:42.537 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:42.537 "hdgst": false, 00:16:42.537 "ddgst": false 00:16:42.537 }, 00:16:42.537 "method": "bdev_nvme_attach_controller" 00:16:42.537 },{ 00:16:42.537 "params": { 00:16:42.537 "name": "Nvme6", 00:16:42.537 "trtype": "rdma", 00:16:42.537 "traddr": "10.0.0.2", 00:16:42.537 "adrfam": "ipv4", 00:16:42.537 "trsvcid": "4420", 00:16:42.537 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:42.537 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:42.537 "hdgst": false, 00:16:42.537 "ddgst": false 00:16:42.537 }, 00:16:42.537 "method": "bdev_nvme_attach_controller" 00:16:42.537 },{ 00:16:42.537 "params": { 00:16:42.537 "name": "Nvme7", 00:16:42.537 "trtype": "rdma", 00:16:42.537 "traddr": "10.0.0.2", 00:16:42.537 "adrfam": "ipv4", 00:16:42.537 "trsvcid": "4420", 00:16:42.537 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:42.537 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:42.537 "hdgst": false, 00:16:42.537 "ddgst": false 00:16:42.537 }, 00:16:42.537 "method": "bdev_nvme_attach_controller" 00:16:42.537 },{ 00:16:42.537 "params": { 00:16:42.537 "name": "Nvme8", 00:16:42.537 "trtype": "rdma", 00:16:42.537 "traddr": "10.0.0.2", 00:16:42.537 "adrfam": "ipv4", 00:16:42.537 "trsvcid": "4420", 00:16:42.537 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:42.537 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:42.537 "hdgst": false, 00:16:42.537 "ddgst": false 00:16:42.537 }, 00:16:42.537 "method": "bdev_nvme_attach_controller" 00:16:42.537 },{ 00:16:42.537 "params": { 00:16:42.537 "name": "Nvme9", 00:16:42.537 "trtype": "rdma", 00:16:42.537 "traddr": "10.0.0.2", 00:16:42.537 "adrfam": "ipv4", 00:16:42.537 "trsvcid": "4420", 00:16:42.537 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:42.537 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:16:42.537 "hdgst": false, 00:16:42.537 "ddgst": false 00:16:42.537 }, 00:16:42.537 "method": "bdev_nvme_attach_controller" 00:16:42.537 },{ 00:16:42.537 "params": { 00:16:42.537 "name": "Nvme10", 00:16:42.537 "trtype": "rdma", 00:16:42.537 "traddr": "10.0.0.2", 00:16:42.537 "adrfam": "ipv4", 00:16:42.537 "trsvcid": "4420", 00:16:42.537 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:42.537 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:42.537 "hdgst": false, 00:16:42.537 "ddgst": false 00:16:42.538 }, 00:16:42.538 "method": "bdev_nvme_attach_controller" 00:16:42.538 }' 00:16:42.538 [2024-11-20 11:39:45.840934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.538 [2024-11-20 11:39:45.888938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.473 Running I/O for 10 seconds... 00:16:43.473 11:39:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.473 11:39:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:16:43.473 11:39:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:43.473 11:39:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.473 11:39:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:43.473 11:39:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.473 11:39:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:43.473 11:39:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:16:43.473 11:39:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:43.473 11:39:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:16:43.473 11:39:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:16:43.473 11:39:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:16:43.473 11:39:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:16:43.473 11:39:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:16:43.473 11:39:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:43.473 11:39:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:16:43.473 11:39:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.473 11:39:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:43.731 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.731 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:16:43.731 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:16:43.731 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:16:43.990 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # (( i-- )) 00:16:43.990 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:16:43.990 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:43.990 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:16:43.990 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.990 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:43.990 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.990 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=155 00:16:43.990 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 155 -ge 100 ']' 00:16:43.990 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:16:43.990 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:16:43.990 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:16:43.990 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1648139 00:16:43.990 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1648139 ']' 00:16:43.990 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1648139 00:16:43.990 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # 
uname 00:16:43.990 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.990 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1648139 00:16:44.249 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:44.249 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:44.249 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1648139' 00:16:44.249 killing process with pid 1648139 00:16:44.249 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1648139 00:16:44.249 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1648139 00:16:44.507 2591.00 IOPS, 161.94 MiB/s [2024-11-20T10:39:47.987Z] 11:39:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:16:45.449 [2024-11-20 11:39:48.559369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.449 [2024-11-20 11:39:48.559420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7250 p:0 m:0 dnr:0 00:16:45.449 [2024-11-20 11:39:48.559439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.449 [2024-11-20 11:39:48.559451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7250 p:0 m:0 dnr:0 00:16:45.449 [2024-11-20 11:39:48.559464] nvme_qpair.c: 
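The `killprocess` call traced above (autotest_common.sh@954-978) guards the kill: it requires a non-empty pid, probes the process with `kill -0`, on Linux resolves the command name via `ps --no-headers -o comm=` (the harness uses this to special-case `sudo`-wrapped processes), then kills and `wait`s on the pid. A condensed sketch of the same checks, exercised against a throwaway background process:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess guard from autotest_common.sh as traced
# above: verify the pid argument, confirm the process is alive, report
# its command name on Linux, then signal it and reap it.
killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1            # the '[' -z ... ']' guard in the trace
  if ! kill -0 "$pid" 2>/dev/null; then
    return 1                            # kill -0: process already gone
  fi
  if [ "$(uname)" = Linux ]; then
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    # The real helper refuses to treat a bare 'sudo' as the target here;
    # this sketch just reports the resolved name.
    echo "killing process with pid $pid ($process_name)"
  fi
  kill "$pid"
  wait "$pid" 2>/dev/null || true       # reap; ignore the child's signal status
}

sleep 30 &
killprocess $!
```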
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.449 [2024-11-20 11:39:48.559480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7250 p:0 m:0 dnr:0 00:16:45.449 [2024-11-20 11:39:48.559506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.449 [2024-11-20 11:39:48.559522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7250 p:0 m:0 dnr:0 00:16:45.449 [2024-11-20 11:39:48.561587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.449 [2024-11-20 11:39:48.561628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:16:45.449 [2024-11-20 11:39:48.561653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.449 [2024-11-20 11:39:48.561685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.449 [2024-11-20 11:39:48.561698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.449 [2024-11-20 11:39:48.561712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.561724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.561736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 
cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.561762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.561777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.564062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.450 [2024-11-20 11:39:48.564126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:16:45.450 [2024-11-20 11:39:48.564190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.564236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.564290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.564332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.564385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.564435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.564462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.564474] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.566471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.450 [2024-11-20 11:39:48.566525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:16:45.450 [2024-11-20 11:39:48.566602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.566653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.566700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.566746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.566796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.566840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.566893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.566935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.569473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.450 [2024-11-20 11:39:48.569503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:16:45.450 [2024-11-20 11:39:48.569532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.569571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.569590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.569609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.569627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.569661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.569681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.569699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.572101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.450 [2024-11-20 11:39:48.572155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:16:45.450 [2024-11-20 11:39:48.572222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.572276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.572321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.572372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.572420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.572473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.572527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.572572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.575050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.450 [2024-11-20 11:39:48.575108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:16:45.450 [2024-11-20 11:39:48.575175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.575228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.575273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.575320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.575367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.575411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.575465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.575510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.578027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.450 [2024-11-20 11:39:48.578118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:16:45.450 [2024-11-20 11:39:48.578197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.578257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.578302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.578352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.578400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.578446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.578498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.578543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.580785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.450 [2024-11-20 11:39:48.580850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:16:45.450 [2024-11-20 11:39:48.580912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.580969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.581027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.581097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.581143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.581197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.581243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.581289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.583259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.450 [2024-11-20 11:39:48.583288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:16:45.450 [2024-11-20 11:39:48.583317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.450 [2024-11-20 11:39:48.583353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.450 [2024-11-20 11:39:48.583373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.451 [2024-11-20 11:39:48.583391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.451 [2024-11-20 11:39:48.583411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.451 [2024-11-20 11:39:48.583441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.451 [2024-11-20 11:39:48.583459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.451 [2024-11-20 11:39:48.583477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32649 cdw0:1 sqhd:1990 p:0 m:0 dnr:0 00:16:45.451 [2024-11-20 11:39:48.586176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:45.451 [2024-11-20 11:39:48.586238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:16:45.451 [2024-11-20 11:39:48.588540] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:16:45.451 [2024-11-20 11:39:48.591254] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:16:45.451 [2024-11-20 11:39:48.593703] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:16:45.451 [2024-11-20 11:39:48.596341] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:16:45.451 [2024-11-20 11:39:48.598581] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:16:45.451 [2024-11-20 11:39:48.601082] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:16:45.451 [2024-11-20 11:39:48.603458] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:16:45.451 [2024-11-20 11:39:48.603590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001857fc80 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.603628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.603660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001856fc00 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.603680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.603720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001855fb80 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.603740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.603764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001854fb00 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.603794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.603821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001853fa80 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.603840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.603866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001852fa00 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.603895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.603920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001851f980 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.603939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.603980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001850f900 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.604001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.604025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000184ff880 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.604068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.604094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000184ef800 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.604113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.604153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000184df780 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.604176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.604200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000184cf700 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.604236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.604261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000184bf680 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.604281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.604313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000184af600 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.604339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.604363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001849f580 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.604381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.604421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001848f500 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.604440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.604466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001847f480 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.604499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.604524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001846f400 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.604543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.604579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001845f380 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.604600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.604624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001844f300 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.604646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.604681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001843f280 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.604701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.604726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001842f200 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.604760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.604787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001841f180 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.604806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.451 [2024-11-20 11:39:48.604847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001840f100 len:0x10000 key:0x1c2000
00:16:45.451 [2024-11-20 11:39:48.604866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.604891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000187f0000 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.604920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.604953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000187dff80 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.604971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.604997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000187cff00 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.605063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000187bfe80 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.605121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000187afe00 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.605165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001879fd80 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.605222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001878fd00 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.605280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001877fc80 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.605322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001876fc00 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.605382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001875fb80 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.605429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001874fb00 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.605483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001873fa80 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.605545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001872fa00 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.605588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001871f980 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.605648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001870f900 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.605697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000186ff880 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.605746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000186ef800 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.605803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000186df780 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.605849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000186cf700 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.605908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000186bf680 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.605960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000186af600 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.605994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.606018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001869f580 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.606065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.606091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001868f500 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.606111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.606152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001867f480 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.606172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.606197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001866f400 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.606233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.606260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001865f380 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.606278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.606316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001864f300 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.606337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.606364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001863f280 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.606395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.606426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001862f200 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.606445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.606475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001861f180 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.606504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.606528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001860f100 len:0x10000 key:0x1c1f00
00:16:45.452 [2024-11-20 11:39:48.606548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.606590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000189f0000 len:0x10000 key:0x1c1500
00:16:45.452 [2024-11-20 11:39:48.606613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.606637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001820f700 len:0x10000 key:0x1c1a00
00:16:45.452 [2024-11-20 11:39:48.606673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.606698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb9b000 len:0x10000 key:0x1c1c00
00:16:45.452 [2024-11-20 11:39:48.606717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.606763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbbc000 len:0x10000 key:0x1c1c00
00:16:45.452 [2024-11-20 11:39:48.606783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.452 [2024-11-20 11:39:48.606808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb59000 len:0x10000 key:0x1c1c00
00:16:45.452 [2024-11-20 11:39:48.606843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.606868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb7a000 len:0x10000 key:0x1c1c00
00:16:45.453 [2024-11-20 11:39:48.606888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.606929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb17000 len:0x10000 key:0x1c1c00
00:16:45.453 [2024-11-20 11:39:48.606949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.606973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb38000 len:0x10000 key:0x1c1c00
00:16:45.453 [2024-11-20 11:39:48.607004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.607045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc1f000 len:0x10000 key:0x1c1c00
00:16:45.453 [2024-11-20 11:39:48.607064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.610262] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:16:45.453 [2024-11-20 11:39:48.610313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000188cfd00 len:0x10000 key:0x1c1500
00:16:45.453 [2024-11-20 11:39:48.610334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.610366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000188bfc80 len:0x10000 key:0x1c1500
00:16:45.453 [2024-11-20 11:39:48.610398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.610423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000188afc00 len:0x10000 key:0x1c1500
00:16:45.453 [2024-11-20 11:39:48.610446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.610485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001889fb80 len:0x10000 key:0x1c1500
00:16:45.453 [2024-11-20 11:39:48.610504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.610530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001888fb00 len:0x10000 key:0x1c1500
00:16:45.453 [2024-11-20 11:39:48.610566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.610594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001887fa80 len:0x10000 key:0x1c1500
00:16:45.453 [2024-11-20 11:39:48.610612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.610649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001886fa00 len:0x10000 key:0x1c1500
00:16:45.453 [2024-11-20 11:39:48.610670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.610694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001885f980 len:0x10000 key:0x1c1500
00:16:45.453 [2024-11-20 11:39:48.610719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.610751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001884f900 len:0x10000 key:0x1c1500
00:16:45.453 [2024-11-20 11:39:48.610771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.610796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001883f880 len:0x10000 key:0x1c1500
00:16:45.453 [2024-11-20 11:39:48.610834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.610859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001882f800 len:0x10000 key:0x1c1500
00:16:45.453 [2024-11-20 11:39:48.610878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.610918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001881f780 len:0x10000 key:0x1c1500
00:16:45.453 [2024-11-20 11:39:48.610940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.610965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001880f700 len:0x10000 key:0x1c1500
00:16:45.453 [2024-11-20 11:39:48.610999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.611025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bf0000 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.611063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.611100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bdff80 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.611119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.611144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bcff00 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.611178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.611203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bbfe80 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.611223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.611263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bafe00 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.611283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.611306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b9fd80 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.611341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.611367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b8fd00 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.611387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.611420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b7fc80 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.611445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.611469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b6fc00 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.611488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.611530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b5fb80 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.611551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.611575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b4fb00 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.611609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.611635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b3fa80 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.611654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.611700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b2fa00 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.611720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.611744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b1f980 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.611780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.611806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b0f900 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.611825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.611864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018aff880 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.611886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.611910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018aef800 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.611936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.611966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018adf780 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.611986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.453 [2024-11-20 11:39:48.612012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018acf700 len:0x10000 key:0x1c2200
00:16:45.453 [2024-11-20 11:39:48.612056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.454 [2024-11-20 11:39:48.612081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018abf680 len:0x10000 key:0x1c2200
00:16:45.454 [2024-11-20 11:39:48.612106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.454 [2024-11-20 11:39:48.612139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018aaf600 len:0x10000 key:0x1c2200
00:16:45.454 [2024-11-20 11:39:48.612160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.454 [2024-11-20 11:39:48.612186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a9f580 len:0x10000 key:0x1c2200
00:16:45.454 [2024-11-20 11:39:48.612218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.454 [2024-11-20 11:39:48.612243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a8f500 len:0x10000 key:0x1c2200
00:16:45.454 [2024-11-20 11:39:48.612264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.454 [2024-11-20 11:39:48.612310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a7f480 len:0x10000 key:0x1c2200
00:16:45.454 [2024-11-20 11:39:48.612329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.454 [2024-11-20 11:39:48.612354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a6f400 len:0x10000 key:0x1c2200
00:16:45.454 [2024-11-20 11:39:48.612392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.454 [2024-11-20 11:39:48.612417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a5f380 len:0x10000 key:0x1c2200
00:16:45.454 [2024-11-20 11:39:48.612436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.454 [2024-11-20 11:39:48.612475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a4f300 len:0x10000 key:0x1c2200
00:16:45.454 [2024-11-20 11:39:48.612496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0
00:16:45.454 [2024-11-20 11:39:48.612520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a3f280 len:0x10000 key:0x1c2200
00:16:45.454 [2024-11-20 11:39:48.612557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.612582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a2f200 len:0x10000 key:0x1c2200 00:16:45.454 [2024-11-20 11:39:48.612601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.612633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a1f180 len:0x10000 key:0x1c2200 00:16:45.454 [2024-11-20 11:39:48.612657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.612681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a0f100 len:0x10000 key:0x1c2200 00:16:45.454 [2024-11-20 11:39:48.612701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.612741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018df0000 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.612762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.612785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ddff80 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.612819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.612845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018dcff00 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.612863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.612894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018dbfe80 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.612920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.612946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018dafe00 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.612972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.613005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d9fd80 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.613025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.613068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d8fd00 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.613092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 
00:16:45.454 [2024-11-20 11:39:48.613115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d7fc80 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.613137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.613176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d6fc00 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.613197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.613221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d5fb80 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.613257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.613281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d4fb00 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.613301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.613344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d3fa80 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.613363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.613387] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d2fa00 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.613419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.613448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d1f980 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.613466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.613496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d0f900 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.613523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.613548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018cff880 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.613567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.613610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018cef800 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.613630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.613653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018cdf780 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.613686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.613711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ccf700 len:0x10000 key:0x1c2300 00:16:45.454 [2024-11-20 11:39:48.613730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.613765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000188dfd80 len:0x10000 key:0x1c1500 00:16:45.454 [2024-11-20 11:39:48.613788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.616704] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:16:45.454 [2024-11-20 11:39:48.616749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edfd80 len:0x10000 key:0x1c1b00 00:16:45.454 [2024-11-20 11:39:48.616769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.616810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfd00 len:0x10000 key:0x1c1b00 00:16:45.454 [2024-11-20 11:39:48.616831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.454 [2024-11-20 11:39:48.616856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfc80 len:0x10000 key:0x1c1b00 00:16:45.455 [2024-11-20 11:39:48.616891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.616920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafc00 len:0x10000 key:0x1c1b00 00:16:45.455 [2024-11-20 11:39:48.616939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.616982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fb80 len:0x10000 key:0x1c1b00 00:16:45.455 [2024-11-20 11:39:48.617003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617026] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fb00 len:0x10000 key:0x1c1b00 00:16:45.455 [2024-11-20 11:39:48.617076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fa80 len:0x10000 key:0x1c1b00 00:16:45.455 [2024-11-20 11:39:48.617123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fa00 len:0x10000 key:0x1c1b00 00:16:45.455 [2024-11-20 11:39:48.617183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5f980 len:0x10000 key:0x1c1b00 00:16:45.455 [2024-11-20 11:39:48.617244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4f900 len:0x10000 key:0x1c1b00 00:16:45.455 [2024-11-20 11:39:48.617287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617329] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3f880 len:0x10000 key:0x1c1b00 00:16:45.455 [2024-11-20 11:39:48.617349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f800 len:0x10000 key:0x1c1b00 00:16:45.455 [2024-11-20 11:39:48.617416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f780 len:0x10000 key:0x1c1b00 00:16:45.455 [2024-11-20 11:39:48.617457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f700 len:0x10000 key:0x1c1b00 00:16:45.455 [2024-11-20 11:39:48.617512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018cbf680 len:0x10000 key:0x1c2300 00:16:45.455 [2024-11-20 11:39:48.617552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018caf600 len:0x10000 key:0x1c2300 00:16:45.455 [2024-11-20 11:39:48.617610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c9f580 len:0x10000 key:0x1c2300 00:16:45.455 [2024-11-20 11:39:48.617670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c8f500 len:0x10000 key:0x1c2300 00:16:45.455 [2024-11-20 11:39:48.617713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c7f480 len:0x10000 key:0x1c2300 00:16:45.455 [2024-11-20 11:39:48.617766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c6f400 len:0x10000 key:0x1c2300 00:16:45.455 [2024-11-20 11:39:48.617809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x200018c5f380 len:0x10000 key:0x1c2300 00:16:45.455 [2024-11-20 11:39:48.617865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c4f300 len:0x10000 key:0x1c2300 00:16:45.455 [2024-11-20 11:39:48.617912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c3f280 len:0x10000 key:0x1c2300 00:16:45.455 [2024-11-20 11:39:48.617964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.617986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c2f200 len:0x10000 key:0x1c2300 00:16:45.455 [2024-11-20 11:39:48.618024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.618061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c1f180 len:0x10000 key:0x1c2300 00:16:45.455 [2024-11-20 11:39:48.618085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.618117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c0f100 len:0x10000 key:0x1c2300 
00:16:45.455 [2024-11-20 11:39:48.618136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.618159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191f0000 len:0x10000 key:0x1c1d00 00:16:45.455 [2024-11-20 11:39:48.618193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.618218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191dff80 len:0x10000 key:0x1c1d00 00:16:45.455 [2024-11-20 11:39:48.618238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.618274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191cff00 len:0x10000 key:0x1c1d00 00:16:45.455 [2024-11-20 11:39:48.618295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.618318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191bfe80 len:0x10000 key:0x1c1d00 00:16:45.455 [2024-11-20 11:39:48.618336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.618371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191afe00 len:0x10000 key:0x1c1d00 00:16:45.455 [2024-11-20 11:39:48.618389] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.618411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001919fd80 len:0x10000 key:0x1c1d00 00:16:45.455 [2024-11-20 11:39:48.618437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.618466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001918fd00 len:0x10000 key:0x1c1d00 00:16:45.455 [2024-11-20 11:39:48.618485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.455 [2024-11-20 11:39:48.618509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001917fc80 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.618544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.618568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001916fc00 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.618586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.618623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001915fb80 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.618643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.618668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001914fb00 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.618692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.618724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001913fa80 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.618743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.618765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001912fa00 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.618796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.618825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001911f980 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.618843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.618874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001910f900 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.618900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.618922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190ff880 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.618940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.618977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190ef800 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.618997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190df780 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.619064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cf700 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.619110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bf680 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.619167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190af600 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.619212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909f580 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.619263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908f500 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.619316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907f480 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.619359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906f400 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.619417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 
00:16:45.456 [2024-11-20 11:39:48.619441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905f380 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.619459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904f300 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.619515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903f280 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.619559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f200 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.619611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f180 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.619670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619692] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f100 len:0x10000 key:0x1c1d00 00:16:45.456 [2024-11-20 11:39:48.619709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193f0000 len:0x10000 key:0x1c2600 00:16:45.456 [2024-11-20 11:39:48.619763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193dff80 len:0x10000 key:0x1c2600 00:16:45.456 [2024-11-20 11:39:48.619803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193cff00 len:0x10000 key:0x1c2600 00:16:45.456 [2024-11-20 11:39:48.619864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193bfe80 len:0x10000 key:0x1c2600 00:16:45.456 [2024-11-20 11:39:48.619913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193afe00 len:0x10000 key:0x1c2600 00:16:45.456 [2024-11-20 11:39:48.619959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.619982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001939fd80 len:0x10000 key:0x1c2600 00:16:45.456 [2024-11-20 11:39:48.620017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.620050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eefe00 len:0x10000 key:0x1c1b00 00:16:45.456 [2024-11-20 11:39:48.620069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6896e000 sqhd:7250 p:0 m:0 dnr:0 00:16:45.456 [2024-11-20 11:39:48.638585] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:16:45.456 [2024-11-20 11:39:48.638759] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:16:45.456 [2024-11-20 11:39:48.638779] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:16:45.456 [2024-11-20 11:39:48.638814] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:16:45.456 [2024-11-20 11:39:48.638832] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:16:45.456 [2024-11-20 11:39:48.638849] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:16:45.456 [2024-11-20 11:39:48.638866] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:16:45.456 [2024-11-20 11:39:48.638896] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:16:45.456 [2024-11-20 11:39:48.638918] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:16:45.456 [2024-11-20 11:39:48.638935] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:16:45.456 [2024-11-20 11:39:48.638951] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:16:45.456 [2024-11-20 11:39:48.645223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:16:45.457 [2024-11-20 11:39:48.645256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:16:45.457 [2024-11-20 11:39:48.646087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:16:45.457 [2024-11-20 11:39:48.646115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:16:45.457 [2024-11-20 11:39:48.646129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:16:45.457 [2024-11-20 11:39:48.646143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:16:45.457 task offset: 35840 on job bdev=Nvme1n1 fails 00:16:45.457 00:16:45.457 Latency(us) 00:16:45.457 [2024-11-20T10:39:48.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.457 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:45.457 Job: Nvme1n1 ended in about 1.89 seconds with error 00:16:45.457 Verification LBA range: start 0x0 length 0x400 00:16:45.457 Nvme1n1 : 1.89 135.19 8.45 33.80 0.00 375465.00 35788.35 1050399.61 00:16:45.457 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:45.457 Job: Nvme2n1 ended in about 1.89 seconds with error 00:16:45.457 Verification LBA range: start 0x0 length 0x400 00:16:45.457 Nvme2n1 : 1.89 135.13 8.45 33.78 0.00 372299.33 36700.16 1050399.61 00:16:45.457 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:45.457 Job: Nvme3n1 ended in about 1.90 seconds with error 00:16:45.457 Verification LBA range: start 0x0 length 0x400 00:16:45.457 Nvme3n1 : 1.90 150.89 9.43 33.77 0.00 337603.01 6040.71 1050399.61 00:16:45.457 Job: Nvme4n1 (Core Mask 
0x1, workload: verify, depth: 64, IO size: 65536) 00:16:45.457 Job: Nvme4n1 ended in about 1.90 seconds with error 00:16:45.457 Verification LBA range: start 0x0 length 0x400 00:16:45.457 Nvme4n1 : 1.90 147.13 9.20 33.75 0.00 341602.23 4986.43 1050399.61 00:16:45.457 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:45.457 Job: Nvme5n1 ended in about 1.90 seconds with error 00:16:45.457 Verification LBA range: start 0x0 length 0x400 00:16:45.457 Nvme5n1 : 1.90 139.16 8.70 33.74 0.00 354291.92 15956.59 1050399.61 00:16:45.457 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:45.457 Job: Nvme6n1 ended in about 1.90 seconds with error 00:16:45.457 Verification LBA range: start 0x0 length 0x400 00:16:45.457 Nvme6n1 : 1.90 143.31 8.96 33.72 0.00 342980.21 17666.23 1043105.17 00:16:45.457 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:45.457 Job: Nvme7n1 ended in about 1.90 seconds with error 00:16:45.457 Verification LBA range: start 0x0 length 0x400 00:16:45.457 Nvme7n1 : 1.90 144.30 9.02 33.70 0.00 338334.36 22453.20 1043105.17 00:16:45.457 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:45.457 Job: Nvme8n1 ended in about 1.86 seconds with error 00:16:45.457 Verification LBA range: start 0x0 length 0x400 00:16:45.457 Nvme8n1 : 1.86 137.52 8.59 34.38 0.00 348227.49 27012.23 1094166.26 00:16:45.457 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:45.457 Job: Nvme9n1 ended in about 1.87 seconds with error 00:16:45.457 Verification LBA range: start 0x0 length 0x400 00:16:45.457 Nvme9n1 : 1.87 137.03 8.56 34.26 0.00 346573.96 48325.68 1086871.82 00:16:45.457 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:45.457 Job: Nvme10n1 ended in about 1.87 seconds with error 00:16:45.457 Verification LBA range: start 0x0 length 0x400 00:16:45.457 Nvme10n1 : 1.87 102.43 6.40 34.14 0.00 430736.25 
48781.58 1079577.38 00:16:45.457 [2024-11-20T10:39:48.937Z] =================================================================================================================== 00:16:45.457 [2024-11-20T10:39:48.937Z] Total : 1372.09 85.76 339.04 0.00 356863.59 4986.43 1094166.26 00:16:45.457 [2024-11-20 11:39:48.671482] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:45.457 [2024-11-20 11:39:48.671526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:16:45.457 [2024-11-20 11:39:48.671548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:16:45.457 [2024-11-20 11:39:48.671581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:16:45.457 [2024-11-20 11:39:48.671598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:16:45.457 [2024-11-20 11:39:48.681999] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:45.457 [2024-11-20 11:39:48.682048] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:45.457 [2024-11-20 11:39:48.682074] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040 00:16:45.457 [2024-11-20 11:39:48.682167] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:45.457 [2024-11-20 11:39:48.682191] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:45.457 [2024-11-20 11:39:48.682205] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168e5300 00:16:45.457 [2024-11-20 11:39:48.688356] nvme_rdma.c: 
538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:45.457 [2024-11-20 11:39:48.688426] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:45.457 [2024-11-20 11:39:48.688471] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168d9c80 00:16:45.457 [2024-11-20 11:39:48.688665] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:45.457 [2024-11-20 11:39:48.688709] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:45.457 [2024-11-20 11:39:48.688756] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168d2900 00:16:45.457 [2024-11-20 11:39:48.688893] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:45.457 [2024-11-20 11:39:48.688941] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:45.457 [2024-11-20 11:39:48.688956] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168c6340 00:16:45.457 [2024-11-20 11:39:48.689053] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:45.457 [2024-11-20 11:39:48.689093] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:45.457 [2024-11-20 11:39:48.689109] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168c5040 00:16:45.457 [2024-11-20 11:39:48.690109] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received 
RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:45.457 [2024-11-20 11:39:48.690148] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:45.457 [2024-11-20 11:39:48.690167] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168bf1c0 00:16:45.457 [2024-11-20 11:39:48.690275] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:45.457 [2024-11-20 11:39:48.690306] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:45.457 [2024-11-20 11:39:48.690325] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001688e080 00:16:45.457 [2024-11-20 11:39:48.690418] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:45.457 [2024-11-20 11:39:48.690448] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:45.457 [2024-11-20 11:39:48.690475] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168a8500 00:16:45.457 [2024-11-20 11:39:48.690564] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:45.457 [2024-11-20 11:39:48.690589] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:45.457 [2024-11-20 11:39:48.690608] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001689b1c0 00:16:45.715 11:39:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1648276 00:16:45.715 11:39:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- 
# local es=0 00:16:45.715 11:39:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1648276 00:16:45.715 11:39:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:16:45.715 11:39:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:45.715 11:39:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:16:45.715 11:39:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:45.715 11:39:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1648276 00:16:46.282 [2024-11-20 11:39:49.686484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:46.282 [2024-11-20 11:39:49.686548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:16:46.282 [2024-11-20 11:39:49.687936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:46.282 [2024-11-20 11:39:49.687984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:16:46.282 [2024-11-20 11:39:49.688045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:16:46.282 [2024-11-20 11:39:49.688062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:16:46.282 [2024-11-20 11:39:49.688077] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:16:46.282 [2024-11-20 11:39:49.688094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:16:46.282 [2024-11-20 11:39:49.688116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:16:46.282 [2024-11-20 11:39:49.688129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:16:46.282 [2024-11-20 11:39:49.688143] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state 00:16:46.282 [2024-11-20 11:39:49.688156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:16:46.282 [2024-11-20 11:39:49.692951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:46.282 [2024-11-20 11:39:49.693001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:16:46.282 [2024-11-20 11:39:49.694378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:46.282 [2024-11-20 11:39:49.694399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:16:46.282 [2024-11-20 11:39:49.695653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:46.282 [2024-11-20 11:39:49.695693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:16:46.282 [2024-11-20 11:39:49.697350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:46.282 [2024-11-20 11:39:49.697370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:16:46.282 [2024-11-20 11:39:49.698653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:46.282 [2024-11-20 11:39:49.698695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:16:46.282 [2024-11-20 11:39:49.700202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:46.282 [2024-11-20 11:39:49.700226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:16:46.282 [2024-11-20 11:39:49.701647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:46.282 [2024-11-20 11:39:49.701688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:16:46.282 [2024-11-20 11:39:49.703246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:46.282 [2024-11-20 11:39:49.703288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:16:46.282 [2024-11-20 11:39:49.703315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:16:46.282 [2024-11-20 11:39:49.703343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:16:46.282 [2024-11-20 11:39:49.703373] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state 00:16:46.282 [2024-11-20 11:39:49.703406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:16:46.282 [2024-11-20 11:39:49.703446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:16:46.282 [2024-11-20 11:39:49.703475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:16:46.282 [2024-11-20 11:39:49.703504] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state 00:16:46.282 [2024-11-20 11:39:49.703535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:16:46.282 [2024-11-20 11:39:49.703550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:16:46.282 [2024-11-20 11:39:49.703564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:16:46.282 [2024-11-20 11:39:49.703576] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state 00:16:46.282 [2024-11-20 11:39:49.703590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:16:46.282 [2024-11-20 11:39:49.703606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:16:46.282 [2024-11-20 11:39:49.703619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:16:46.282 [2024-11-20 11:39:49.703632] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state 00:16:46.282 [2024-11-20 11:39:49.703645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:16:46.282 [2024-11-20 11:39:49.703726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:16:46.282 [2024-11-20 11:39:49.703743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:16:46.282 [2024-11-20 11:39:49.703755] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state 00:16:46.282 [2024-11-20 11:39:49.703769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:16:46.282 [2024-11-20 11:39:49.703785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:16:46.282 [2024-11-20 11:39:49.703798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:16:46.282 [2024-11-20 11:39:49.703815] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state 00:16:46.282 [2024-11-20 11:39:49.703828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:16:46.282 [2024-11-20 11:39:49.703844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:16:46.282 [2024-11-20 11:39:49.703858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:16:46.282 [2024-11-20 11:39:49.703871] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state 00:16:46.282 [2024-11-20 11:39:49.703884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:16:46.282 [2024-11-20 11:39:49.703899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:16:46.282 [2024-11-20 11:39:49.703912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:16:46.282 [2024-11-20 11:39:49.703925] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state 00:16:46.282 [2024-11-20 11:39:49.703939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@99 -- # sync 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:16:46.541 11:39:49 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # set +e 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:16:46.541 rmmod nvme_rdma 00:16:46.541 rmmod nvme_fabrics 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # set -e 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # return 0 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # '[' -n 1648139 ']' 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@337 -- # killprocess 1648139 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1648139 ']' 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1648139 00:16:46.541 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1648139) - No such process 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1648139 is not found' 00:16:46.541 Process with pid 1648139 is not found 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:46.541 11:39:49 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # nvmf_fini 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@264 -- # local dev 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@267 -- # remove_target_ns 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@268 -- # delete_main_bridge 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@130 -- # return 0 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:16:46.541 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:16:46.542 11:39:49 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@41 -- # _dev=0 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@41 -- # dev_map=() 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@284 -- # iptr 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@542 -- # iptables-save 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@542 -- # iptables-restore 00:16:46.542 00:16:46.542 real 0m5.691s 00:16:46.542 user 0m16.188s 00:16:46.542 sys 0m1.462s 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.542 11:39:49 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:46.542 ************************************ 00:16:46.542 END TEST nvmf_shutdown_tc3 00:16:46.542 ************************************ 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:16:46.802 ************************************ 00:16:46.802 START TEST nvmf_shutdown_tc4 00:16:46.802 ************************************ 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:16:46.802 11:39:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@296 -- # prepare_net_devs 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@260 -- # remove_target_ns 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # xtrace_disable 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@131 -- # pci_devs=() 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@131 -- # local -a pci_devs 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@132 -- # pci_net_devs=() 
00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@133 -- # pci_drivers=() 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@135 -- # net_devs=() 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@135 -- # local -ga net_devs 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@136 -- # e810=() 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@136 -- # local -ga e810 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@137 -- # x722=() 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@137 -- # local -ga x722 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@138 -- # mlx=() 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@138 -- # local -ga mlx 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:46.802 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:46.802 11:39:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 
00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:46.803 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:46.803 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@192 -- 
# [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:46.803 Found net devices under 0000:18:00.0: mlx_0_0 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:46.803 Found net devices under 0000:18:00.1: mlx_0_1 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # get_rdma_if_list 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@75 -- # rdma_devs=() 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@79 -- # (( 2 == 0 )) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@89 -- # continue 2 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@89 -- # continue 2 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:16:46.803 11:39:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # is_hw=yes 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@61 -- # uname 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@65 -- # modprobe ib_cm 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@66 -- # modprobe ib_core 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_umad 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe iw_cm 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_ucm 
00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@28 -- # local -g _dev 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@44 -- # ips=() 00:16:46.803 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:16:46.804 11:39:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@58 -- # key_initiator=target1 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@11 -- # local val=167772161 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # 
eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:16:46.804 10.0.0.1 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@11 -- # local val=167772162 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee 
/sys/class/net/mlx_0_1/ifalias' 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:16:46.804 10.0.0.2 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:16:46.804 
11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@38 -- # ping_ips 1 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:46.804 11:39:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=target0 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:16:46.804 11:39:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:16:46.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.029 ms 00:16:46.804 00:16:46.804 --- 10.0.0.2 ping statistics --- 00:16:46.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.804 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=target0 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval ' cat 
/sys/class/net/mlx_0_1/ifalias' 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:46.804 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:16:47.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:47.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.024 ms 00:16:47.064 00:16:47.064 --- 10.0.0.2 ping statistics --- 00:16:47.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.064 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@270 -- # return 0 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:47.064 11:39:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=target0 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:16:47.064 11:39:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=target1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=target0 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:16:47.064 
11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=target1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:16:47.064 11:39:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:47.064 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@328 -- # nvmfpid=1649036 00:16:47.065 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@329 -- # waitforlisten 1649036 00:16:47.065 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1649036 ']' 00:16:47.065 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.065 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.065 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.065 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.065 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:47.065 [2024-11-20 11:39:50.426829] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:16:47.065 [2024-11-20 11:39:50.426881] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.065 [2024-11-20 11:39:50.506942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:47.323 [2024-11-20 11:39:50.558813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.323 [2024-11-20 11:39:50.558851] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:47.323 [2024-11-20 11:39:50.558861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.323 [2024-11-20 11:39:50.558870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.323 [2024-11-20 11:39:50.558878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:47.323 [2024-11-20 11:39:50.560244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.323 [2024-11-20 11:39:50.560323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:47.323 [2024-11-20 11:39:50.560423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.323 [2024-11-20 11:39:50.560425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:47.323 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.323 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:16:47.323 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:47.323 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:47.323 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:47.323 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.324 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:47.324 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:47.324 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:47.324 [2024-11-20 11:39:50.736740] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb5e520/0xb62a10) succeed. 00:16:47.324 [2024-11-20 11:39:50.746028] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb5fbb0/0xba40b0) succeed. 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:47.582 11:39:50 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:47.582 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:47.583 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:47.583 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:47.583 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:47.583 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:16:47.583 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:16:47.583 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.583 11:39:50 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:47.583 Malloc1 00:16:47.583 [2024-11-20 11:39:50.996697] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:16:47.583 Malloc2 00:16:47.841 Malloc3 00:16:47.841 Malloc4 00:16:47.841 Malloc5 00:16:47.841 Malloc6 00:16:47.841 Malloc7 00:16:47.841 Malloc8 00:16:48.099 Malloc9 00:16:48.099 Malloc10 00:16:48.099 11:39:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.099 11:39:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:16:48.099 11:39:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:48.099 11:39:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:48.100 11:39:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1649240 00:16:48.100 11:39:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:16:48.100 11:39:51 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:16:48.100 [2024-11-20 11:39:51.556540] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:16:53.371 11:39:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:53.371 11:39:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1649036 00:16:53.371 11:39:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1649036 ']' 00:16:53.371 11:39:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1649036 00:16:53.371 11:39:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:16:53.371 11:39:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.371 11:39:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1649036 00:16:53.371 11:39:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:53.371 11:39:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:53.371 11:39:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1649036' 00:16:53.371 killing process with pid 1649036 00:16:53.371 11:39:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1649036 00:16:53.371 11:39:56 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1649036 00:16:53.371 NVMe io qpair process completion error 00:16:53.371 NVMe io qpair process completion error 00:16:53.371 
NVMe io qpair process completion error 00:16:53.371 starting I/O failed: -6 00:16:53.371 starting I/O failed: -6 00:16:53.371 starting I/O failed: -6 00:16:53.371 NVMe io qpair process completion error 00:16:53.371 NVMe io qpair process completion error 00:16:53.371 NVMe io qpair process completion error 00:16:53.371 NVMe io qpair process completion error 00:16:53.938 11:39:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:16:54.198 Write completed with error (sct=0, sc=8) 00:16:54.198 starting I/O failed: -6
[… the two messages above repeat for each queued I/O, 00:16:54.198–00:16:54.199 …]
00:16:54.199 Write completed with error (sct=0, sc=8)
[… the message above repeats for each completed I/O, 00:16:54.199–00:16:54.200 …]
00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6
[… the two messages above repeat …]
00:16:54.200 [2024-11-20 11:39:57.631326] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed
00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6
[… the two messages above repeat …] 00:16:54.200
starting I/O failed: -6 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 starting I/O failed: -6 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, 
sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.200 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, 
sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 
Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 [2024-11-20 11:39:57.642064] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error 
(sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 starting I/O failed: -6 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write 
completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.201 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write 
completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write 
completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 starting I/O failed: -6 00:16:54.202 [2024-11-20 11:39:57.653459] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error 
(sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error 
(sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.202 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error 
(sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 [2024-11-20 11:39:57.663470] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Submitting Keep Alive failed 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write 
completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 
Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 starting I/O failed: -6 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 00:16:54.203 Write completed with error (sct=0, sc=8) 
00:16:54.203 Write completed with error (sct=0, sc=8)
00:16:54.203 ["Write completed with error (sct=0, sc=8)" repeated for the remaining outstanding writes through 00:16:54.204]
00:16:54.204 starting I/O failed: -6
00:16:54.204 [2024-11-20 11:39:57.674197] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:16:54.463 [alternating "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" messages repeated through 00:16:54.464]
00:16:54.464 [2024-11-20 11:39:57.685435] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed
00:16:54.464 NVMe io qpair process completion error
00:16:54.464 NVMe io qpair process completion error
00:16:54.464 NVMe io qpair process completion error
00:16:54.464 NVMe io qpair process completion error
00:16:54.722 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1649240
00:16:54.722 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:16:54.722 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1649240
00:16:54.722 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:16:54.722 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:54.722 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:16:54.722 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:54.722 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1649240
00:16:55.291 Write completed with error (sct=0, sc=8)
00:16:55.291 ["Write completed with error (sct=0, sc=8)" repeated for the remaining outstanding writes through 00:16:55.292]
00:16:55.292 [2024-11-20 11:39:58.690101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:55.292 [2024-11-20 11:39:58.690170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:16:55.292 ["Write completed with error (sct=0, sc=8)" repeated]
00:16:55.292 [2024-11-20 11:39:58.692131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:55.292 [2024-11-20 11:39:58.692177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:16:55.292 ["Write completed with error (sct=0, sc=8)" repeated]
00:16:55.292 [2024-11-20 11:39:58.694505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:55.292 [2024-11-20 11:39:58.694548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:16:55.292 ["Write completed with error (sct=0, sc=8)" repeated]
00:16:55.292 [2024-11-20 11:39:58.701833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:55.292 [2024-11-20 11:39:58.701880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:16:55.292 ["Write completed with error (sct=0, sc=8)" repeated]
00:16:55.292 [2024-11-20 11:39:58.703971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:55.292 [2024-11-20 11:39:58.704013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:16:55.292 ["Write completed with error (sct=0, sc=8)" repeated for the remaining outstanding writes through 00:16:55.293]
00:16:55.293 [2024-11-20 11:39:58.712545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:55.293 [2024-11-20 11:39:58.712591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:16:55.293 ["Write completed with error (sct=0, sc=8)" repeated]
00:16:55.293 [2024-11-20 11:39:58.715456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:55.293 [2024-11-20 11:39:58.715502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:16:55.293 ["Write completed with error (sct=0, sc=8)" repeated]
00:16:55.293 [2024-11-20 11:39:58.718106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:16:55.293 [2024-11-20 11:39:58.718160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:16:55.293 ["Write completed with error (sct=0, sc=8)" repeated through 00:16:55.294]
00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 [2024-11-20 11:39:58.722695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:55.294 [2024-11-20 11:39:58.722739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 
00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 Write completed with error (sct=0, sc=8) 00:16:55.294 [2024-11-20 11:39:58.762909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:55.294 [2024-11-20 11:39:58.762944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:16:55.294 Initializing NVMe Controllers 00:16:55.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:16:55.294 Controller IO queue size 128, less than required. 00:16:55.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:55.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:16:55.294 Controller IO queue size 128, less than required. 00:16:55.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:16:55.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:16:55.294 Controller IO queue size 128, less than required. 00:16:55.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:55.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:16:55.294 Controller IO queue size 128, less than required. 00:16:55.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:55.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:16:55.294 Controller IO queue size 128, less than required. 00:16:55.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:55.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:16:55.294 Controller IO queue size 128, less than required. 00:16:55.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:55.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:55.294 Controller IO queue size 128, less than required. 00:16:55.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:55.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:16:55.294 Controller IO queue size 128, less than required. 00:16:55.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:55.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:16:55.294 Controller IO queue size 128, less than required. 00:16:55.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:16:55.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:16:55.294 Controller IO queue size 128, less than required. 00:16:55.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:55.294 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:16:55.294 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:16:55.294 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:16:55.294 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:16:55.294 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:16:55.294 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:16:55.294 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:55.294 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:16:55.294 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:16:55.294 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:16:55.294 Initialization complete. Launching workers. 
00:16:55.294 ======================================================== 00:16:55.294 Latency(us) 00:16:55.294 Device Information : IOPS MiB/s Average min max 00:16:55.294 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1472.93 63.29 85992.00 116.97 1170316.36 00:16:55.294 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1479.98 63.59 85677.93 116.71 1161545.66 00:16:55.294 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1471.93 63.25 86267.09 120.84 1203387.44 00:16:55.294 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1495.08 64.24 99873.61 117.20 2177979.12 00:16:55.294 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1484.51 63.79 85263.48 114.09 1159955.98 00:16:55.294 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1492.23 64.12 100112.53 114.20 2179256.01 00:16:55.294 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1458.34 62.66 87187.14 117.92 1234593.25 00:16:55.295 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1469.24 63.13 86647.12 109.53 1192264.19 00:16:55.295 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1496.93 64.32 99813.21 114.83 2148553.91 00:16:55.295 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1512.37 64.98 98883.25 117.30 2112676.57 00:16:55.295 ======================================================== 00:16:55.295 Total : 14833.55 637.38 91626.19 109.53 2179256.01 00:16:55.295 00:16:55.295 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:55.555 11:39:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@99 -- # sync 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@102 -- # set +e 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:16:55.555 rmmod nvme_rdma 00:16:55.555 rmmod nvme_fabrics 00:16:55.555 11:39:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # set -e 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # return 0 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # '[' -n 1649036 ']' 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@337 -- # killprocess 1649036 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1649036 ']' 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1649036 00:16:55.555 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1649036) - No such process 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1649036 is not found' 00:16:55.555 Process with pid 1649036 is not found 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # nvmf_fini 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@264 -- # local dev 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@267 -- # remove_target_ns 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval 
'_remove_target_ns 15> /dev/null' 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@268 -- # delete_main_bridge 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@130 -- # return 0 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@41 -- # _dev=0 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@41 -- # dev_map=() 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@284 -- # iptr 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@542 -- # iptables-save 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@542 -- # iptables-restore 00:16:55.555 00:16:55.555 real 0m8.783s 00:16:55.555 user 0m32.532s 00:16:55.555 sys 0m1.381s 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:16:55.555 ************************************ 00:16:55.555 END TEST nvmf_shutdown_tc4 
00:16:55.555 ************************************ 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:16:55.555 00:16:55.555 real 0m32.762s 00:16:55.555 user 1m38.159s 00:16:55.555 sys 0m10.341s 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:16:55.555 ************************************ 00:16:55.555 END TEST nvmf_shutdown 00:16:55.555 ************************************ 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:55.555 ************************************ 00:16:55.555 START TEST nvmf_nsid 00:16:55.555 ************************************ 00:16:55.555 11:39:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:16:55.816 * Looking for test storage... 
00:16:55.816 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 
00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:55.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.816 --rc genhtml_branch_coverage=1 00:16:55.816 --rc genhtml_function_coverage=1 00:16:55.816 --rc genhtml_legend=1 00:16:55.816 --rc 
geninfo_all_blocks=1 00:16:55.816 --rc geninfo_unexecuted_blocks=1 00:16:55.816 00:16:55.816 ' 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:55.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.816 --rc genhtml_branch_coverage=1 00:16:55.816 --rc genhtml_function_coverage=1 00:16:55.816 --rc genhtml_legend=1 00:16:55.816 --rc geninfo_all_blocks=1 00:16:55.816 --rc geninfo_unexecuted_blocks=1 00:16:55.816 00:16:55.816 ' 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:55.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.816 --rc genhtml_branch_coverage=1 00:16:55.816 --rc genhtml_function_coverage=1 00:16:55.816 --rc genhtml_legend=1 00:16:55.816 --rc geninfo_all_blocks=1 00:16:55.816 --rc geninfo_unexecuted_blocks=1 00:16:55.816 00:16:55.816 ' 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:55.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.816 --rc genhtml_branch_coverage=1 00:16:55.816 --rc genhtml_function_coverage=1 00:16:55.816 --rc genhtml_legend=1 00:16:55.816 --rc geninfo_all_blocks=1 00:16:55.816 --rc geninfo_unexecuted_blocks=1 00:16:55.816 00:16:55.816 ' 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.816 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.817 11:39:59 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@50 -- # : 0 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # '[' '' 
-eq 1 ']' 00:16:55.817 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@296 -- # prepare_net_devs 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@260 -- # remove_target_ns 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
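The log captures a real shell error at `common.sh: line 31`: an empty string reaches a numeric `-eq` test (`'[' '' -eq 1 ']'`), which `[` rejects with "integer expression expected". A sketch of the standard guard, defaulting the empty value before the comparison (the variable name here is hypothetical, standing in for whichever flag was unset):

```shell
# An empty variable in a numeric test triggers the error seen in the log:
#   [ "" -eq 1 ]  ->  [: : integer expression expected
SOME_FLAG=""   # hypothetical name for the unset flag in common.sh

# Guarded form: ${var:-0} substitutes 0 when the variable is unset or null,
# so the numeric comparison always receives an integer.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
  echo "flag set"
else
  echo "flag unset"
fi
```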
common/autotest_common.sh@22 -- # _remove_target_ns 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # xtrace_disable 00:16:55.817 11:39:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@131 -- # pci_devs=() 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@135 -- # net_devs=() 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@136 -- # e810=() 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@136 -- # local -ga e810 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@137 -- # x722=() 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@137 -- # local -ga x722 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@138 -- # mlx=() 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@138 -- # local -ga mlx 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:02.386 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:02.386 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:02.386 
11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.386 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:02.387 Found net devices under 0000:18:00.0: mlx_0_0 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:02.387 Found net devices under 0000:18:00.1: mlx_0_1 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # get_rdma_if_list 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@75 -- # rdma_devs=() 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@89 -- # continue 2 00:17:02.387 11:40:05 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@89 -- # continue 2 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # is_hw=yes 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@61 -- # uname 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:17:02.387 11:40:05 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@65 -- # modprobe ib_cm 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_core 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_umad 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe iw_cm 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@28 -- # local -g _dev 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # ips=() 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 
_ns= 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # key_initiator=target1 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772161 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:17:02.387 10.0.0.1 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772162 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:17:02.387 10.0.0.2 00:17:02.387 11:40:05 
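In the trace, `val_to_ip` turns the integer pool value 167772161 (0x0A000001) into the dotted quad 10.0.0.1 before `ip addr add` assigns it. A standalone equivalent of that conversion, extracting each octet with shifts and masks:

```shell
# Convert a 32-bit integer IP value to dotted-quad notation,
# mirroring the val_to_ip step in the setup trace.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xFF )) \
    $(( (val >> 16) & 0xFF )) \
    $(( (val >> 8)  & 0xFF )) \
    $((  val        & 0xFF ))
}

val_to_ip 167772161   # → 10.0.0.1
val_to_ip 167772162   # → 10.0.0.2
```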
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:02.387 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@96 -- # local 
pairs=1 pair 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=target0 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:02.388 11:40:05 
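The setup flow above records each interface's assigned IP by writing it to `/sys/class/net/<dev>/ifalias`, and `get_ip_address` later recovers it with `cat` on the same node. A sketch of that round trip using a scratch directory in place of sysfs (writing the real sysfs attribute requires root and the device to exist):

```shell
# Round-trip an IP through an ifalias-style file, as set_ip/get_ip_address
# do in the trace. A temp directory stands in for /sys/class/net.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/mlx_0_1"

echo 10.0.0.2 > "$sysfs/mlx_0_1/ifalias"   # set_ip: record the assigned IP
ip=$(cat "$sysfs/mlx_0_1/ifalias")         # get_ip_address: read it back

echo "$ip"
rm -rf "$sysfs"
```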
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:17:02.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.029 ms 00:17:02.388 00:17:02.388 --- 10.0.0.2 ping statistics --- 00:17:02.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.388 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=target0 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:02.388 
11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:17:02.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:02.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.022 ms 00:17:02.388 00:17:02.388 --- 10.0.0.2 ping statistics --- 00:17:02.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.388 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair++ )) 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@270 -- # return 0 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=target0 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:02.388 
11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev target1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=target1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid 
-- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=target0 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:02.388 11:40:05 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:17:02.388 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev target1 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=target1 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval ' 
cat /sys/class/net/mlx_0_0/ifalias' 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # nvmfpid=1653109 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:17:02.389 11:40:05 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@329 -- # waitforlisten 1653109 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1653109 ']' 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.389 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:02.389 [2024-11-20 11:40:05.740439] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:17:02.389 [2024-11-20 11:40:05.740498] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.389 [2024-11-20 11:40:05.820123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.648 [2024-11-20 11:40:05.868171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.648 [2024-11-20 11:40:05.868212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.648 [2024-11-20 11:40:05.868222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.648 [2024-11-20 11:40:05.868231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:02.648 [2024-11-20 11:40:05.868238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.648 [2024-11-20 11:40:05.868709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.648 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.648 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:02.648 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:02.648 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:02.648 11:40:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:02.648 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.648 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:02.648 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1653128 00:17:02.648 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:17:02.648 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:17:02.648 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.2 00:17:02.648 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:17:02.648 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=3d47b176-8c41-486c-8345-d8f851385d05 00:17:02.648 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:17:02.648 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # 
ns2uuid=dde375e2-d11c-4963-b863-2d29a83e600e 00:17:02.648 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:17:02.648 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=ab97b816-a909-46fc-a167-50de3094dfc2 00:17:02.648 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:17:02.648 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.648 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:02.648 null0 00:17:02.648 null1 00:17:02.648 [2024-11-20 11:40:06.073017] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:17:02.648 [2024-11-20 11:40:06.073099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653128 ] 00:17:02.648 null2 00:17:02.648 [2024-11-20 11:40:06.102683] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13ba180/0x132e380) succeed. 00:17:02.648 [2024-11-20 11:40:06.112199] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13bb630/0x1299330) succeed. 
00:17:02.907 [2024-11-20 11:40:06.150806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.907 [2024-11-20 11:40:06.163227] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:17:02.907 [2024-11-20 11:40:06.199306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.907 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.907 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1653128 /var/tmp/tgt2.sock 00:17:02.907 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1653128 ']' 00:17:02.907 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:17:02.907 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.907 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:17:02.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:17:02.907 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.907 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:03.165 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.165 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:03.165 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:17:03.423 [2024-11-20 11:40:06.783325] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1abf820/0x18e8f90) succeed. 
00:17:03.423 [2024-11-20 11:40:06.793996] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18d8280/0x192a630) succeed. 00:17:03.423 [2024-11-20 11:40:06.836236] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4421 *** 00:17:03.423 nvme0n1 nvme0n2 00:17:03.423 nvme1n1 00:17:03.423 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:17:03.423 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:17:03.423 11:40:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 10.0.0.2 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:08.690 11:40:11 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 3d47b176-8c41-486c-8345-d8f851385d05 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@538 -- # tr -d - 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3d47b1768c41486c8345d8f851385d05 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3D47B1768C41486C8345D8F851385D05 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 3D47B1768C41486C8345D8F851385D05 == \3\D\4\7\B\1\7\6\8\C\4\1\4\8\6\C\8\3\4\5\D\8\F\8\5\1\3\8\5\D\0\5 ]] 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 
00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid dde375e2-d11c-4963-b863-2d29a83e600e 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@538 -- # tr -d - 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=dde375e2d11c4963b8632d29a83e600e 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DDE375E2D11C4963B8632D29A83E600E 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ DDE375E2D11C4963B8632D29A83E600E == \D\D\E\3\7\5\E\2\D\1\1\C\4\9\6\3\B\8\6\3\2\D\2\9\A\8\3\E\6\0\0\E ]] 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- 
# grep -q -w nvme0n3 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid ab97b816-a909-46fc-a167-50de3094dfc2 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@538 -- # tr -d - 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ab97b816a90946fca16750de3094dfc2 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AB97B816A90946FCA16750DE3094DFC2 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ AB97B816A90946FCA16750DE3094DFC2 == \A\B\9\7\B\8\1\6\A\9\0\9\4\6\F\C\A\1\6\7\5\0\D\E\3\0\9\4\D\F\C\2 ]] 00:17:08.690 11:40:11 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:17:12.940 11:40:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:17:12.940 11:40:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:17:12.940 11:40:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1653128 00:17:12.940 11:40:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1653128 ']' 00:17:12.940 11:40:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1653128 00:17:12.940 11:40:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@959 -- # uname 00:17:12.940 11:40:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:12.940 11:40:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1653128 00:17:12.940 11:40:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:12.940 11:40:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:12.940 11:40:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1653128' 00:17:12.940 killing process with pid 1653128 00:17:12.940 11:40:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1653128 00:17:12.940 11:40:15 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1653128 00:17:12.940 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:17:12.940 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:12.940 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@99 -- # sync 00:17:12.940 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:17:12.940 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:17:12.940 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@102 -- # set +e 00:17:12.940 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:12.940 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:17:12.940 rmmod nvme_rdma 00:17:12.940 rmmod nvme_fabrics 00:17:12.940 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:13.198 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@106 -- # set -e 00:17:13.198 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # return 0 00:17:13.198 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # '[' -n 1653109 ']' 00:17:13.198 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@337 -- # killprocess 1653109 00:17:13.198 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1653109 ']' 00:17:13.198 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1653109 00:17:13.198 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:13.198 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:13.198 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1653109 00:17:13.198 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:13.198 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:13.198 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1653109' 00:17:13.198 killing process with pid 1653109 00:17:13.198 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1653109 00:17:13.198 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1653109 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@342 -- # nvmf_fini 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@264 -- # local dev 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@267 -- # remove_target_ns 00:17:13.457 11:40:16 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@268 -- # delete_main_bridge 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@130 -- # return 0 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@279 -- # flush_ip 
mlx_0_0 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # _dev=0 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # dev_map=() 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@284 -- # iptr 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@542 -- # iptables-save 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@542 -- # iptables-restore 00:17:13.457 00:17:13.457 real 0m17.720s 00:17:13.457 user 0m23.150s 00:17:13.457 sys 0m6.017s 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:13.457 ************************************ 00:17:13.457 END TEST nvmf_nsid 00:17:13.457 ************************************ 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:13.457 00:17:13.457 real 7m36.539s 00:17:13.457 user 18m1.022s 00:17:13.457 sys 2m12.663s 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.457 11:40:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
00:17:13.457 ************************************ 00:17:13.457 END TEST nvmf_target_extra 00:17:13.458 ************************************ 00:17:13.458 11:40:16 nvmf_rdma -- nvmf/nvmf.sh@12 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:17:13.458 11:40:16 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:13.458 11:40:16 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.458 11:40:16 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:13.458 ************************************ 00:17:13.458 START TEST nvmf_host 00:17:13.458 ************************************ 00:17:13.458 11:40:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:17:13.718 * Looking for test storage... 00:17:13.718 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:17:13.718 11:40:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:13.718 11:40:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:17:13.718 11:40:16 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:13.718 11:40:17 
nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:13.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.718 --rc genhtml_branch_coverage=1 00:17:13.718 --rc genhtml_function_coverage=1 00:17:13.718 --rc genhtml_legend=1 00:17:13.718 --rc geninfo_all_blocks=1 00:17:13.718 --rc geninfo_unexecuted_blocks=1 00:17:13.718 00:17:13.718 ' 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:13.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.718 --rc genhtml_branch_coverage=1 00:17:13.718 --rc genhtml_function_coverage=1 00:17:13.718 --rc genhtml_legend=1 00:17:13.718 --rc geninfo_all_blocks=1 00:17:13.718 --rc geninfo_unexecuted_blocks=1 00:17:13.718 00:17:13.718 ' 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:13.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.718 --rc genhtml_branch_coverage=1 00:17:13.718 --rc genhtml_function_coverage=1 00:17:13.718 --rc genhtml_legend=1 00:17:13.718 --rc geninfo_all_blocks=1 00:17:13.718 --rc geninfo_unexecuted_blocks=1 00:17:13.718 00:17:13.718 ' 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:13.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.718 --rc genhtml_branch_coverage=1 00:17:13.718 --rc genhtml_function_coverage=1 00:17:13.718 --rc genhtml_legend=1 00:17:13.718 --rc geninfo_all_blocks=1 00:17:13.718 --rc geninfo_unexecuted_blocks=1 00:17:13.718 00:17:13.718 ' 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.718 11:40:17 
nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.718 11:40:17 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@50 -- # : 0 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:13.719 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 
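The `[: : integer expression expected` message logged above comes from the traced check `'[' '' -eq 1 ']'`: the `-eq` operator requires integers on both sides, and an unset variable expands to the empty string. A minimal sketch of the failure and a defensive default follows; the variable name `flag` is illustrative, not taken from the scripts:

```shell
#!/usr/bin/env bash
flag=""                        # unset/empty, as in the traced check

# Reproduces the logged error: '-eq' needs an integer operand, '' is not one,
# so the test builtin prints the error and returns a nonzero status.
[ "$flag" -eq 1 ] 2>/dev/null || echo "integer test failed, as in the log"

# Defensive form: default the empty value to 0 before comparing.
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
```

With the `${flag:-0}` expansion the comparison always sees an integer, so the error never reaches the log even when the variable was never assigned.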
00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.719 ************************************ 00:17:13.719 START TEST nvmf_aer 00:17:13.719 ************************************ 00:17:13.719 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:17:13.978 * Looking for test storage... 00:17:13.978 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:17:13.978 11:40:17 
nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:13.978 11:40:17 
nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:13.978 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:13.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.979 --rc genhtml_branch_coverage=1 00:17:13.979 --rc genhtml_function_coverage=1 00:17:13.979 --rc genhtml_legend=1 00:17:13.979 --rc geninfo_all_blocks=1 00:17:13.979 --rc geninfo_unexecuted_blocks=1 00:17:13.979 00:17:13.979 ' 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:13.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.979 --rc genhtml_branch_coverage=1 00:17:13.979 --rc genhtml_function_coverage=1 00:17:13.979 --rc genhtml_legend=1 00:17:13.979 --rc geninfo_all_blocks=1 00:17:13.979 --rc geninfo_unexecuted_blocks=1 00:17:13.979 00:17:13.979 ' 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:13.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.979 --rc genhtml_branch_coverage=1 00:17:13.979 --rc genhtml_function_coverage=1 00:17:13.979 --rc genhtml_legend=1 00:17:13.979 --rc geninfo_all_blocks=1 00:17:13.979 --rc geninfo_unexecuted_blocks=1 00:17:13.979 00:17:13.979 ' 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:13.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.979 --rc genhtml_branch_coverage=1 00:17:13.979 --rc genhtml_function_coverage=1 00:17:13.979 --rc genhtml_legend=1 00:17:13.979 --rc geninfo_all_blocks=1 00:17:13.979 --rc geninfo_unexecuted_blocks=1 00:17:13.979 00:17:13.979 ' 
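The long trace above steps through the `lt 1.15 2` / `cmp_versions 1.15 '<' 2` check in scripts/common.sh: each version string is split on `.`/`-` into fields, and the fields are compared numerically position by position. A condensed sketch of that idea, not the scripts' exact code:

```shell
#!/usr/bin/env bash
# Sketch of a "less than" version comparison, in the spirit of the traced
# cmp_versions: split on '.', compare each numeric field, missing fields
# default to 0. Returns 0 (true) when $1 sorts strictly before $2.
lt() {
    local IFS=.
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2, so the older-lcov LCOV_OPTS branch is taken"
```

This matches the decision visible in the trace: with lcov at 1.15, `lt 1.15 2` succeeds and the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options are exported.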
00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s 
extglob 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@50 -- # : 0 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # export 
NVMF_APP_SHM_ID 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:13.979 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # remove_target_ns 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:13.979 11:40:17 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # xtrace_disable 00:17:13.979 11:40:17 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@131 -- # pci_devs=() 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@135 -- # net_devs=() 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@136 -- # e810=() 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@136 -- # local -ga e810 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@137 -- # x722=() 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@137 -- # local -ga x722 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@138 -- # mlx=() 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@138 -- # local -ga mlx 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:20.548 11:40:23 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:20.548 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:20.548 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:20.548 
11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:20.548 Found net devices under 0000:18:00.0: mlx_0_0 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:20.548 Found net devices under 0000:18:00.1: mlx_0_1 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:20.548 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # get_rdma_if_list 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@75 -- # rdma_devs=() 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@89 -- # continue 2 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@89 -- # continue 2 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:17:20.549 
11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # is_hw=yes 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@61 -- # uname 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_cm 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_core 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_umad 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe iw_cm 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@28 -- # local -g _dev 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # ips=() 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@58 -- # key_initiator=target1 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 
ip=167772161 in_ns= 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772161 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:17:20.549 10.0.0.1 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772162 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # ip addr add 
10.0.0.2/24 dev mlx_0_1 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:17:20.549 10.0.0.2 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:20.549 11:40:23 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=target0 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:20.549 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:17:20.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.029 ms 00:17:20.550 00:17:20.550 --- 10.0.0.2 ping statistics --- 00:17:20.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.550 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=target0 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n 
mlx_0_1 ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:17:20.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:20.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.023 ms 00:17:20.550 00:17:20.550 --- 10.0.0.2 ping statistics --- 00:17:20.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.550 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair++ )) 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # return 0 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=target0 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev target1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=target1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval ' 
cat /sys/class/net/mlx_0_0/ifalias' 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=target0 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 
10.0.0.2 ]] 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:17:20.550 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev target1 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=target1 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 
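[Editor's note] The set_ip entries earlier in this trace call `val_to_ip 167772161` and print `10.0.0.1` (nvmf/setup.sh@11-13): the IP pool is carried as a 32-bit integer and unpacked into a dotted quad per device. One way to implement that octet split, consistent with the values in the trace (the shift-based body is a sketch, not necessarily the exact setup.sh code):

```shell
#!/usr/bin/env bash
# Sketch of val_to_ip: unpack a 32-bit integer into dotted-quad notation.
# 167772161 == 0x0A000001 == 10.0.0.1, matching the trace output.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $((val >> 24)) $(((val >> 16) & 0xff)) $(((val >> 8) & 0xff)) $((val & 0xff))
}
val_to_ip 167772161   # initiator-side address in the trace
val_to_ip 167772162   # target-side address in the trace
```

This is why the trace assigns consecutive pool values (167772161, 167772162) to mlx_0_0 and mlx_0_1: each interface pair consumes two adjacent addresses.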
00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # nvmfpid=1657564 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # waitforlisten 1657564 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1657564 ']' 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:20.551 [2024-11-20 11:40:23.676054] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:17:20.551 [2024-11-20 11:40:23.676119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.551 [2024-11-20 11:40:23.756377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.551 [2024-11-20 11:40:23.806743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.551 [2024-11-20 11:40:23.806787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.551 [2024-11-20 11:40:23.806797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.551 [2024-11-20 11:40:23.806806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.551 [2024-11-20 11:40:23.806813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
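[Editor's note] The target was launched with `-m 0xF`, and spdk_app_start reports "Total cores available: 4"; the reactor lines that follow confirm one reactor per set bit (cores 0-3). A quick check of that arithmetic (the popcount loop is illustrative, not SPDK code):

```shell
#!/usr/bin/env bash
# -m 0xF is a core mask; the number of reactors is its popcount.
mask=0xF
count=0
for ((bit = 0; bit < 32; bit++)); do
  if (( (mask >> bit) & 1 )); then
    count=$((count + 1))
  fi
done
echo "$count"   # 4, matching "Total cores available: 4" in the trace
```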
00:17:20.551 [2024-11-20 11:40:23.808287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.551 [2024-11-20 11:40:23.808378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.551 [2024-11-20 11:40:23.808456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.551 [2024-11-20 11:40:23.808458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.551 11:40:23 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:20.551 [2024-11-20 11:40:23.993841] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd5a220/0xd5e710) succeed. 00:17:20.551 [2024-11-20 11:40:24.003069] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd5b8b0/0xd9fdb0) succeed. 
00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:20.811 Malloc0 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:20.811 [2024-11-20 11:40:24.200835] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:17:20.811 11:40:24 
nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:20.811 [ 00:17:20.811 { 00:17:20.811 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:20.811 "subtype": "Discovery", 00:17:20.811 "listen_addresses": [], 00:17:20.811 "allow_any_host": true, 00:17:20.811 "hosts": [] 00:17:20.811 }, 00:17:20.811 { 00:17:20.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.811 "subtype": "NVMe", 00:17:20.811 "listen_addresses": [ 00:17:20.811 { 00:17:20.811 "trtype": "RDMA", 00:17:20.811 "adrfam": "IPv4", 00:17:20.811 "traddr": "10.0.0.2", 00:17:20.811 "trsvcid": "4420" 00:17:20.811 } 00:17:20.811 ], 00:17:20.811 "allow_any_host": true, 00:17:20.811 "hosts": [], 00:17:20.811 "serial_number": "SPDK00000000000001", 00:17:20.811 "model_number": "SPDK bdev Controller", 00:17:20.811 "max_namespaces": 2, 00:17:20.811 "min_cntlid": 1, 00:17:20.811 "max_cntlid": 65519, 00:17:20.811 "namespaces": [ 00:17:20.811 { 00:17:20.811 "nsid": 1, 00:17:20.811 "bdev_name": "Malloc0", 00:17:20.811 "name": "Malloc0", 00:17:20.811 "nguid": "6147DE23DCD94DB7A57930E85939A215", 00:17:20.811 "uuid": "6147de23-dcd9-4db7-a579-30e85939a215" 00:17:20.811 } 00:17:20.811 ] 00:17:20.811 } 00:17:20.811 ] 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1657646 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile 
/tmp/aer_touch_file 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:17:20.811 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:21.070 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.071 Malloc1 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.071 [ 00:17:21.071 { 00:17:21.071 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:21.071 "subtype": "Discovery", 00:17:21.071 "listen_addresses": [], 00:17:21.071 "allow_any_host": true, 00:17:21.071 "hosts": [] 00:17:21.071 }, 00:17:21.071 { 00:17:21.071 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.071 "subtype": "NVMe", 00:17:21.071 "listen_addresses": [ 00:17:21.071 { 00:17:21.071 "trtype": "RDMA", 00:17:21.071 "adrfam": "IPv4", 00:17:21.071 "traddr": "10.0.0.2", 00:17:21.071 "trsvcid": "4420" 00:17:21.071 } 00:17:21.071 ], 00:17:21.071 "allow_any_host": true, 00:17:21.071 "hosts": [], 00:17:21.071 "serial_number": "SPDK00000000000001", 00:17:21.071 
"model_number": "SPDK bdev Controller", 00:17:21.071 "max_namespaces": 2, 00:17:21.071 "min_cntlid": 1, 00:17:21.071 "max_cntlid": 65519, 00:17:21.071 "namespaces": [ 00:17:21.071 { 00:17:21.071 "nsid": 1, 00:17:21.071 "bdev_name": "Malloc0", 00:17:21.071 "name": "Malloc0", 00:17:21.071 "nguid": "6147DE23DCD94DB7A57930E85939A215", 00:17:21.071 "uuid": "6147de23-dcd9-4db7-a579-30e85939a215" 00:17:21.071 }, 00:17:21.071 { 00:17:21.071 "nsid": 2, 00:17:21.071 "bdev_name": "Malloc1", 00:17:21.071 "name": "Malloc1", 00:17:21.071 "nguid": "7137310BCB3C4978BB588C13491C39EB", 00:17:21.071 "uuid": "7137310b-cb3c-4978-bb58-8c13491c39eb" 00:17:21.071 } 00:17:21.071 ] 00:17:21.071 } 00:17:21.071 ] 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1657646 00:17:21.071 Asynchronous Event Request test 00:17:21.071 Attaching to 10.0.0.2 00:17:21.071 Attached to 10.0.0.2 00:17:21.071 Registering asynchronous event callbacks... 00:17:21.071 Starting namespace attribute notice tests for all controllers... 00:17:21.071 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:21.071 aer_cb - Changed Namespace 00:17:21.071 Cleaning up... 
00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.071 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.330 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.330 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:21.330 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.330 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.330 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.330 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:21.330 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@99 -- # sync 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # set +e 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # for i in {1..20} 
00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:17:21.331 rmmod nvme_rdma 00:17:21.331 rmmod nvme_fabrics 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # set -e 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # return 0 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # '[' -n 1657564 ']' 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@337 -- # killprocess 1657564 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1657564 ']' 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1657564 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1657564 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1657564' 00:17:21.331 killing process with pid 1657564 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1657564 00:17:21.331 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1657564 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # nvmf_fini 00:17:21.590 11:40:24 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@264 -- # local dev 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@267 -- # remove_target_ns 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@268 -- # delete_main_bridge 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@130 -- # return 0 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # _dev=0 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # dev_map=() 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/setup.sh@284 -- # iptr 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@542 -- # iptables-save 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@542 -- # iptables-restore 00:17:21.590 00:17:21.590 real 0m7.881s 00:17:21.590 user 0m6.334s 00:17:21.590 sys 0m5.325s 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.590 11:40:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.590 ************************************ 00:17:21.590 END TEST nvmf_aer 00:17:21.590 ************************************ 00:17:21.590 11:40:25 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:17:21.590 11:40:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:21.590 11:40:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.590 11:40:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.850 
************************************ 00:17:21.850 START TEST nvmf_async_init 00:17:21.850 ************************************ 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:17:21.850 * Looking for test storage... 00:17:21.850 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 
-- # local lt=0 gt=0 eq=0 v 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:21.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.850 --rc genhtml_branch_coverage=1 00:17:21.850 --rc genhtml_function_coverage=1 00:17:21.850 --rc genhtml_legend=1 00:17:21.850 --rc geninfo_all_blocks=1 00:17:21.850 --rc geninfo_unexecuted_blocks=1 00:17:21.850 00:17:21.850 ' 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:21.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.850 --rc genhtml_branch_coverage=1 00:17:21.850 --rc genhtml_function_coverage=1 00:17:21.850 --rc genhtml_legend=1 00:17:21.850 --rc geninfo_all_blocks=1 00:17:21.850 --rc geninfo_unexecuted_blocks=1 00:17:21.850 00:17:21.850 ' 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:21.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.850 --rc genhtml_branch_coverage=1 00:17:21.850 --rc genhtml_function_coverage=1 00:17:21.850 --rc genhtml_legend=1 00:17:21.850 --rc geninfo_all_blocks=1 00:17:21.850 --rc geninfo_unexecuted_blocks=1 00:17:21.850 00:17:21.850 ' 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:21.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.850 --rc genhtml_branch_coverage=1 00:17:21.850 --rc genhtml_function_coverage=1 00:17:21.850 --rc genhtml_legend=1 00:17:21.850 --rc geninfo_all_blocks=1 00:17:21.850 --rc geninfo_unexecuted_blocks=1 00:17:21.850 00:17:21.850 ' 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.850 
11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.850 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@50 -- # : 0 00:17:21.851 11:40:25 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:21.851 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8b5df65ebe7040708589938305688a43 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # '[' -z rdma ']' 
00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # remove_target_ns 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # xtrace_disable 00:17:21.851 11:40:25 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@131 -- # pci_devs=() 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:28.417 11:40:31 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@135 -- # net_devs=() 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@136 -- # e810=() 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@136 -- # local -ga e810 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@137 -- # x722=() 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@137 -- # local -ga x722 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@138 -- # mlx=() 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@138 -- # local -ga mlx 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:28.417 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme 
connect -i 15' 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:28.417 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.417 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:28.417 Found net devices under 0000:18:00.0: mlx_0_0 00:17:28.418 11:40:31 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:28.418 Found net devices under 0000:18:00.1: mlx_0_1 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # get_rdma_if_list 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@75 -- # rdma_devs=() 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:17:28.418 
11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@89 -- # continue 2 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@89 -- # continue 2 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # is_hw=yes 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:17:28.418 11:40:31 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@61 -- # uname 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_cm 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_core 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe ib_umad 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe iw_cm 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@28 -- # local -g _dev 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < 
max + no )) 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # ips=() 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@58 -- # key_initiator=target1 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:28.418 11:40:31 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772161 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:17:28.418 10.0.0.1 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772162 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # ip addr add 
10.0.0.2/24 dev mlx_0_1 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:17:28.418 10.0.0.2 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:17:28.418 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 
00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=target0 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 
00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:17:28.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:28.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:17:28.419 00:17:28.419 --- 10.0.0.2 ping statistics --- 00:17:28.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.419 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=target0 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:28.419 11:40:31 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:17:28.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.024 ms 00:17:28.419 00:17:28.419 --- 10.0.0.2 ping statistics --- 00:17:28.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.419 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair++ )) 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # return 0 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 
00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=target0 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # 
local dev=target1 in_ns= ip 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev target1 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=target1 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/setup.sh@168 -- # get_net_dev target0 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=target0 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:28.419 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev target1 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=target1 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init 
-- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # nvmfpid=1660562 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # waitforlisten 1660562 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1660562 ']' 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.420 [2024-11-20 11:40:31.472836] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:17:28.420 [2024-11-20 11:40:31.472905] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.420 [2024-11-20 11:40:31.550853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.420 [2024-11-20 11:40:31.597673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:28.420 [2024-11-20 11:40:31.597719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.420 [2024-11-20 11:40:31.597730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.420 [2024-11-20 11:40:31.597756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.420 [2024-11-20 11:40:31.597767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.420 [2024-11-20 11:40:31.598258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.420 [2024-11-20 11:40:31.772345] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x205f020/0x2063510) succeed. 
00:17:28.420 [2024-11-20 11:40:31.781426] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20604d0/0x20a4bb0) succeed. 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.420 null0 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8b5df65ebe7040708589938305688a43 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 10.0.0.2 -s 4420 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.420 [2024-11-20 11:40:31.849100] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.420 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.679 nvme0n1 00:17:28.679 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.679 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:28.679 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.679 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.679 [ 00:17:28.679 { 00:17:28.679 "name": "nvme0n1", 00:17:28.679 "aliases": [ 00:17:28.679 "8b5df65e-be70-4070-8589-938305688a43" 00:17:28.679 ], 00:17:28.679 "product_name": "NVMe disk", 00:17:28.679 "block_size": 512, 00:17:28.679 "num_blocks": 2097152, 00:17:28.679 "uuid": "8b5df65e-be70-4070-8589-938305688a43", 
00:17:28.679 "numa_id": 0, 00:17:28.679 "assigned_rate_limits": { 00:17:28.679 "rw_ios_per_sec": 0, 00:17:28.679 "rw_mbytes_per_sec": 0, 00:17:28.679 "r_mbytes_per_sec": 0, 00:17:28.679 "w_mbytes_per_sec": 0 00:17:28.679 }, 00:17:28.679 "claimed": false, 00:17:28.679 "zoned": false, 00:17:28.679 "supported_io_types": { 00:17:28.679 "read": true, 00:17:28.679 "write": true, 00:17:28.679 "unmap": false, 00:17:28.679 "flush": true, 00:17:28.679 "reset": true, 00:17:28.679 "nvme_admin": true, 00:17:28.679 "nvme_io": true, 00:17:28.679 "nvme_io_md": false, 00:17:28.679 "write_zeroes": true, 00:17:28.679 "zcopy": false, 00:17:28.679 "get_zone_info": false, 00:17:28.679 "zone_management": false, 00:17:28.679 "zone_append": false, 00:17:28.679 "compare": true, 00:17:28.679 "compare_and_write": true, 00:17:28.679 "abort": true, 00:17:28.679 "seek_hole": false, 00:17:28.679 "seek_data": false, 00:17:28.679 "copy": true, 00:17:28.679 "nvme_iov_md": false 00:17:28.679 }, 00:17:28.679 "memory_domains": [ 00:17:28.679 { 00:17:28.679 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:28.679 "dma_device_type": 0 00:17:28.679 } 00:17:28.679 ], 00:17:28.679 "driver_specific": { 00:17:28.679 "nvme": [ 00:17:28.679 { 00:17:28.679 "trid": { 00:17:28.679 "trtype": "RDMA", 00:17:28.679 "adrfam": "IPv4", 00:17:28.679 "traddr": "10.0.0.2", 00:17:28.679 "trsvcid": "4420", 00:17:28.679 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:28.679 }, 00:17:28.679 "ctrlr_data": { 00:17:28.679 "cntlid": 1, 00:17:28.679 "vendor_id": "0x8086", 00:17:28.679 "model_number": "SPDK bdev Controller", 00:17:28.679 "serial_number": "00000000000000000000", 00:17:28.679 "firmware_revision": "25.01", 00:17:28.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:28.679 "oacs": { 00:17:28.679 "security": 0, 00:17:28.679 "format": 0, 00:17:28.679 "firmware": 0, 00:17:28.679 "ns_manage": 0 00:17:28.679 }, 00:17:28.679 "multi_ctrlr": true, 00:17:28.679 "ana_reporting": false 00:17:28.679 }, 00:17:28.679 "vs": { 00:17:28.679 
"nvme_version": "1.3" 00:17:28.679 }, 00:17:28.679 "ns_data": { 00:17:28.679 "id": 1, 00:17:28.679 "can_share": true 00:17:28.679 } 00:17:28.679 } 00:17:28.679 ], 00:17:28.679 "mp_policy": "active_passive" 00:17:28.679 } 00:17:28.679 } 00:17:28.679 ] 00:17:28.679 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.679 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:28.680 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.680 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.680 [2024-11-20 11:40:31.954349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:28.680 [2024-11-20 11:40:31.972070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:17:28.680 [2024-11-20 11:40:31.993920] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:17:28.680 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.680 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:28.680 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.680 11:40:31 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.680 [ 00:17:28.680 { 00:17:28.680 "name": "nvme0n1", 00:17:28.680 "aliases": [ 00:17:28.680 "8b5df65e-be70-4070-8589-938305688a43" 00:17:28.680 ], 00:17:28.680 "product_name": "NVMe disk", 00:17:28.680 "block_size": 512, 00:17:28.680 "num_blocks": 2097152, 00:17:28.680 "uuid": "8b5df65e-be70-4070-8589-938305688a43", 00:17:28.680 "numa_id": 0, 00:17:28.680 "assigned_rate_limits": { 00:17:28.680 "rw_ios_per_sec": 0, 00:17:28.680 "rw_mbytes_per_sec": 0, 00:17:28.680 "r_mbytes_per_sec": 0, 00:17:28.680 "w_mbytes_per_sec": 0 00:17:28.680 }, 00:17:28.680 "claimed": false, 00:17:28.680 "zoned": false, 00:17:28.680 "supported_io_types": { 00:17:28.680 "read": true, 00:17:28.680 "write": true, 00:17:28.680 "unmap": false, 00:17:28.680 "flush": true, 00:17:28.680 "reset": true, 00:17:28.680 "nvme_admin": true, 00:17:28.680 "nvme_io": true, 00:17:28.680 "nvme_io_md": false, 00:17:28.680 "write_zeroes": true, 00:17:28.680 "zcopy": false, 00:17:28.680 "get_zone_info": false, 00:17:28.680 "zone_management": false, 00:17:28.680 "zone_append": false, 00:17:28.680 "compare": true, 00:17:28.680 "compare_and_write": true, 00:17:28.680 "abort": true, 00:17:28.680 "seek_hole": false, 00:17:28.680 "seek_data": false, 00:17:28.680 "copy": true, 00:17:28.680 "nvme_iov_md": false 00:17:28.680 }, 00:17:28.680 "memory_domains": [ 00:17:28.680 { 00:17:28.680 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:28.680 "dma_device_type": 0 00:17:28.680 } 00:17:28.680 ], 00:17:28.680 "driver_specific": { 00:17:28.680 "nvme": [ 00:17:28.680 { 00:17:28.680 
"trid": { 00:17:28.680 "trtype": "RDMA", 00:17:28.680 "adrfam": "IPv4", 00:17:28.680 "traddr": "10.0.0.2", 00:17:28.680 "trsvcid": "4420", 00:17:28.680 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:28.680 }, 00:17:28.680 "ctrlr_data": { 00:17:28.680 "cntlid": 2, 00:17:28.680 "vendor_id": "0x8086", 00:17:28.680 "model_number": "SPDK bdev Controller", 00:17:28.680 "serial_number": "00000000000000000000", 00:17:28.680 "firmware_revision": "25.01", 00:17:28.680 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:28.680 "oacs": { 00:17:28.680 "security": 0, 00:17:28.680 "format": 0, 00:17:28.680 "firmware": 0, 00:17:28.680 "ns_manage": 0 00:17:28.680 }, 00:17:28.680 "multi_ctrlr": true, 00:17:28.680 "ana_reporting": false 00:17:28.680 }, 00:17:28.680 "vs": { 00:17:28.680 "nvme_version": "1.3" 00:17:28.680 }, 00:17:28.680 "ns_data": { 00:17:28.680 "id": 1, 00:17:28.680 "can_share": true 00:17:28.680 } 00:17:28.680 } 00:17:28.680 ], 00:17:28.680 "mp_policy": "active_passive" 00:17:28.680 } 00:17:28.680 } 00:17:28.680 ] 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.zlJY5rjQJG 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:28.680 11:40:32 
nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.zlJY5rjQJG 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.zlJY5rjQJG 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 10.0.0.2 -s 4421 --secure-channel 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.680 [2024-11-20 11:40:32.085404] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4421 *** 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.680 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.680 [2024-11-20 11:40:32.101444] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:28.940 nvme0n1 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.940 [ 00:17:28.940 { 00:17:28.940 "name": "nvme0n1", 00:17:28.940 "aliases": [ 00:17:28.940 "8b5df65e-be70-4070-8589-938305688a43" 00:17:28.940 ], 00:17:28.940 "product_name": "NVMe disk", 00:17:28.940 "block_size": 512, 00:17:28.940 "num_blocks": 2097152, 00:17:28.940 "uuid": "8b5df65e-be70-4070-8589-938305688a43", 00:17:28.940 "numa_id": 0, 00:17:28.940 "assigned_rate_limits": { 00:17:28.940 "rw_ios_per_sec": 0, 00:17:28.940 "rw_mbytes_per_sec": 0, 00:17:28.940 "r_mbytes_per_sec": 0, 00:17:28.940 "w_mbytes_per_sec": 0 00:17:28.940 }, 00:17:28.940 "claimed": false, 00:17:28.940 "zoned": false, 00:17:28.940 "supported_io_types": { 00:17:28.940 "read": true, 00:17:28.940 "write": true, 00:17:28.940 "unmap": 
false, 00:17:28.940 "flush": true, 00:17:28.940 "reset": true, 00:17:28.940 "nvme_admin": true, 00:17:28.940 "nvme_io": true, 00:17:28.940 "nvme_io_md": false, 00:17:28.940 "write_zeroes": true, 00:17:28.940 "zcopy": false, 00:17:28.940 "get_zone_info": false, 00:17:28.940 "zone_management": false, 00:17:28.940 "zone_append": false, 00:17:28.940 "compare": true, 00:17:28.940 "compare_and_write": true, 00:17:28.940 "abort": true, 00:17:28.940 "seek_hole": false, 00:17:28.940 "seek_data": false, 00:17:28.940 "copy": true, 00:17:28.940 "nvme_iov_md": false 00:17:28.940 }, 00:17:28.940 "memory_domains": [ 00:17:28.940 { 00:17:28.940 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:28.940 "dma_device_type": 0 00:17:28.940 } 00:17:28.940 ], 00:17:28.940 "driver_specific": { 00:17:28.940 "nvme": [ 00:17:28.940 { 00:17:28.940 "trid": { 00:17:28.940 "trtype": "RDMA", 00:17:28.940 "adrfam": "IPv4", 00:17:28.940 "traddr": "10.0.0.2", 00:17:28.940 "trsvcid": "4421", 00:17:28.940 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:28.940 }, 00:17:28.940 "ctrlr_data": { 00:17:28.940 "cntlid": 3, 00:17:28.940 "vendor_id": "0x8086", 00:17:28.940 "model_number": "SPDK bdev Controller", 00:17:28.940 "serial_number": "00000000000000000000", 00:17:28.940 "firmware_revision": "25.01", 00:17:28.940 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:28.940 "oacs": { 00:17:28.940 "security": 0, 00:17:28.940 "format": 0, 00:17:28.940 "firmware": 0, 00:17:28.940 "ns_manage": 0 00:17:28.940 }, 00:17:28.940 "multi_ctrlr": true, 00:17:28.940 "ana_reporting": false 00:17:28.940 }, 00:17:28.940 "vs": { 00:17:28.940 "nvme_version": "1.3" 00:17:28.940 }, 00:17:28.940 "ns_data": { 00:17:28.940 "id": 1, 00:17:28.940 "can_share": true 00:17:28.940 } 00:17:28.940 } 00:17:28.940 ], 00:17:28.940 "mp_policy": "active_passive" 00:17:28.940 } 00:17:28.940 } 00:17:28.940 ] 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.940 11:40:32 
nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.zlJY5rjQJG 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@99 -- # sync 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # set +e 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:17:28.940 rmmod nvme_rdma 00:17:28.940 rmmod nvme_fabrics 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # set -e 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # return 0 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # '[' -n 1660562 ']' 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@337 -- # killprocess 1660562 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1660562 ']' 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1660562 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1660562 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1660562' 00:17:28.940 killing process with pid 1660562 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1660562 00:17:28.940 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1660562 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # nvmf_fini 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@264 -- # local dev 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@267 -- # remove_target_ns 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@22 -- # _remove_target_ns 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@268 -- # delete_main_bridge 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@130 -- # return 0 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 
00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # _dev=0 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # dev_map=() 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/setup.sh@284 -- # iptr 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@542 -- # iptables-save 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@542 -- # iptables-restore 00:17:29.200 00:17:29.200 real 0m7.476s 00:17:29.200 user 0m2.962s 00:17:29.200 sys 0m5.044s 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:29.200 ************************************ 00:17:29.200 END TEST nvmf_async_init 00:17:29.200 ************************************ 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@20 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.200 ************************************ 00:17:29.200 START TEST nvmf_identify 00:17:29.200 ************************************ 00:17:29.200 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:17:29.460 * Looking for test storage... 00:17:29.460 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:29.460 11:40:32 
nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:29.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.460 --rc genhtml_branch_coverage=1 00:17:29.460 --rc genhtml_function_coverage=1 00:17:29.460 --rc genhtml_legend=1 00:17:29.460 --rc 
geninfo_all_blocks=1 00:17:29.460 --rc geninfo_unexecuted_blocks=1 00:17:29.460 00:17:29.460 ' 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:29.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.460 --rc genhtml_branch_coverage=1 00:17:29.460 --rc genhtml_function_coverage=1 00:17:29.460 --rc genhtml_legend=1 00:17:29.460 --rc geninfo_all_blocks=1 00:17:29.460 --rc geninfo_unexecuted_blocks=1 00:17:29.460 00:17:29.460 ' 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:29.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.460 --rc genhtml_branch_coverage=1 00:17:29.460 --rc genhtml_function_coverage=1 00:17:29.460 --rc genhtml_legend=1 00:17:29.460 --rc geninfo_all_blocks=1 00:17:29.460 --rc geninfo_unexecuted_blocks=1 00:17:29.460 00:17:29.460 ' 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:29.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.460 --rc genhtml_branch_coverage=1 00:17:29.460 --rc genhtml_function_coverage=1 00:17:29.460 --rc genhtml_legend=1 00:17:29.460 --rc geninfo_all_blocks=1 00:17:29.460 --rc geninfo_unexecuted_blocks=1 00:17:29.460 00:17:29.460 ' 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.460 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- 
paths/export.sh@5 -- # export PATH 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@50 -- # : 0 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:29.461 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # remove_target_ns 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # xtrace_disable 00:17:29.461 11:40:32 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- 
# set +x 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@131 -- # pci_devs=() 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@135 -- # net_devs=() 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@136 -- # e810=() 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@136 -- # local -ga e810 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@137 -- # x722=() 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@137 -- # local -ga x722 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@138 -- # mlx=() 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@138 -- # local -ga mlx 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:36.029 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # echo 'Found 
0000:18:00.0 (0x15b3 - 0x1015)' 00:17:36.030 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:36.030 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@226 -- # for pci in 
"${pci_devs[@]}" 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:36.030 Found net devices under 0000:18:00.0: mlx_0_0 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:36.030 Found net devices under 0000:18:00.1: mlx_0_1 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # get_rdma_if_list 00:17:36.030 11:40:38 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@75 -- # rdma_devs=() 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@89 -- # continue 2 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@89 -- # continue 2 
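The trace above pairs each discovered PCI function with its kernel network interface by globbing sysfs (common.sh's `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`), then matches those against RDMA-capable devices. A minimal standalone sketch of that sysfs walk follows; the helper name `list_pci_net_devs` and its root-directory parameter are illustrative additions so the snippet can be exercised against a scratch tree, while the real script globs `/sys/bus/pci/devices` directly:

```shell
# Sketch of the per-PCI net-device discovery seen in the log: for each PCI
# function under the given root, list the network interfaces the kernel
# bound to it. The root argument is illustrative (for testing); common.sh
# reads the real /sys/bus/pci/devices tree.
list_pci_net_devs() {
    local root=${1:-/sys/bus/pci/devices}
    local pci net
    for pci in "$root"/*; do
        [ -d "$pci/net" ] || continue        # device has no net interface
        for net in "$pci"/net/*; do
            [ -e "$net" ] || continue        # guard against an empty glob
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done
}
```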
00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # is_hw=yes 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@61 -- # uname 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_cm 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_core 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_umad 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe iw_cm 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@25 -- # 
local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@28 -- # local -g _dev 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # key_initiator=target1 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 
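The interface setup above derives the test addresses from an integer pool (`ip_pool=0x0a000001`): `val_to_ip 167772161` prints `10.0.0.1`, which is then assigned to the first device. The trace only shows the final `printf '%u.%u.%u.%u\n' 10 0 0 1` call, so the shift/mask arithmetic below is a plausible reconstruction, not a copy of setup.sh:

```shell
# Reconstruction of setup.sh's val_to_ip: split a 32-bit integer into four
# octets and print it in dotted-quad form. 167772161 == 0x0a000001 -> 10.0.0.1.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

Consecutive pool values map to consecutive host addresses, which is why the initiator/target pair in the trace lands on 10.0.0.1 and 10.0.0.2.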
00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772161 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:17:36.030 10.0.0.1 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:36.030 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:17:36.031 11:40:38 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772162 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:17:36.031 10.0.0.2 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target0 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n 
target0 ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:17:36.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:36.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.016 ms 00:17:36.031 00:17:36.031 --- 10.0.0.2 ping statistics --- 00:17:36.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.031 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target0 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:17:36.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.023 ms 00:17:36.031 00:17:36.031 --- 10.0.0.2 ping statistics --- 00:17:36.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.031 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair++ )) 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # return 0 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target0 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target1 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target1 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:17:36.031 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local 
dev=target0 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target1 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target1 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:17:36.032 11:40:38 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:36.032 11:40:38 
nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1663553 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1663553 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1663553 ']' 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.032 11:40:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.032 [2024-11-20 11:40:38.958806] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:17:36.032 [2024-11-20 11:40:38.958859] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.032 [2024-11-20 11:40:39.037326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:36.032 [2024-11-20 11:40:39.088670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.032 [2024-11-20 11:40:39.088706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:36.032 [2024-11-20 11:40:39.088717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.032 [2024-11-20 11:40:39.088726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.032 [2024-11-20 11:40:39.088734] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.032 [2024-11-20 11:40:39.089985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.032 [2024-11-20 11:40:39.090001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.032 [2024-11-20 11:40:39.090088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:36.032 [2024-11-20 11:40:39.090090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.032 [2024-11-20 11:40:39.217692] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc3d220/0xc41710) succeed. 00:17:36.032 [2024-11-20 11:40:39.226891] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc3e8b0/0xc82db0) succeed. 
00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.032 Malloc0 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.032 [2024-11-20 11:40:39.462244] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 10.0.0.2 -s 4420 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.032 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.032 [ 00:17:36.032 { 00:17:36.032 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:36.032 "subtype": "Discovery", 00:17:36.032 "listen_addresses": [ 00:17:36.032 { 00:17:36.032 "trtype": "RDMA", 00:17:36.032 "adrfam": "IPv4", 00:17:36.032 "traddr": "10.0.0.2", 00:17:36.032 "trsvcid": "4420" 00:17:36.032 } 00:17:36.032 ], 00:17:36.032 "allow_any_host": true, 00:17:36.032 "hosts": [] 00:17:36.032 }, 00:17:36.032 { 00:17:36.032 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.032 "subtype": "NVMe", 00:17:36.032 "listen_addresses": [ 00:17:36.032 { 00:17:36.033 "trtype": "RDMA", 00:17:36.033 "adrfam": "IPv4", 00:17:36.033 "traddr": "10.0.0.2", 
00:17:36.033 "trsvcid": "4420" 00:17:36.033 } 00:17:36.033 ], 00:17:36.033 "allow_any_host": true, 00:17:36.033 "hosts": [], 00:17:36.033 "serial_number": "SPDK00000000000001", 00:17:36.033 "model_number": "SPDK bdev Controller", 00:17:36.033 "max_namespaces": 32, 00:17:36.033 "min_cntlid": 1, 00:17:36.033 "max_cntlid": 65519, 00:17:36.033 "namespaces": [ 00:17:36.033 { 00:17:36.033 "nsid": 1, 00:17:36.033 "bdev_name": "Malloc0", 00:17:36.033 "name": "Malloc0", 00:17:36.033 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:36.033 "eui64": "ABCDEF0123456789", 00:17:36.033 "uuid": "68f9331d-f9a4-43b0-a2d6-9b416f6e128c" 00:17:36.033 } 00:17:36.033 ] 00:17:36.033 } 00:17:36.033 ] 00:17:36.033 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.033 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:36.292 [2024-11-20 11:40:39.520727] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:17:36.292 [2024-11-20 11:40:39.520768] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663696 ] 00:17:36.292 [2024-11-20 11:40:39.583136] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:17:36.292 [2024-11-20 11:40:39.583225] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:17:36.292 [2024-11-20 11:40:39.583245] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:17:36.292 [2024-11-20 11:40:39.583250] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:17:36.292 [2024-11-20 11:40:39.583281] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:17:36.292 [2024-11-20 11:40:39.594418] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:17:36.292 [2024-11-20 11:40:39.604694] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:17:36.292 [2024-11-20 11:40:39.604705] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:17:36.292 [2024-11-20 11:40:39.604713] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x1c1100 00:17:36.292 [2024-11-20 11:40:39.604721] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x1c1100 00:17:36.292 [2024-11-20 11:40:39.604727] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x1c1100 00:17:36.292 [2024-11-20 11:40:39.604734] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x1c1100 00:17:36.292 [2024-11-20 11:40:39.604740] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x1c1100 00:17:36.292 [2024-11-20 11:40:39.604746] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x1c1100 00:17:36.292 [2024-11-20 11:40:39.604752] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x1c1100 00:17:36.292 [2024-11-20 11:40:39.604759] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x1c1100 00:17:36.292 [2024-11-20 11:40:39.604765] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x1c1100 00:17:36.292 [2024-11-20 11:40:39.604771] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x1c1100 00:17:36.292 [2024-11-20 11:40:39.604777] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x1c1100 00:17:36.292 [2024-11-20 11:40:39.604784] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x1c1100 00:17:36.292 [2024-11-20 11:40:39.604790] nvme_rdma.c: 
878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x1c1100 00:17:36.292 [2024-11-20 11:40:39.604799] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x1c1100 00:17:36.292 [2024-11-20 11:40:39.604805] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x1c1100 00:17:36.292 [2024-11-20 11:40:39.604811] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x1c1100 00:17:36.292 [2024-11-20 11:40:39.604818] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x1c1100 00:17:36.292 [2024-11-20 11:40:39.604824] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x1c1100 00:17:36.292 [2024-11-20 11:40:39.604830] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.604836] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.604843] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.604849] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.604855] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.604861] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.604867] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.604874] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.604880] 
nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.604886] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.604892] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.604898] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.604905] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.604910] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:17:36.293 [2024-11-20 11:40:39.604917] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:17:36.293 [2024-11-20 11:40:39.604921] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:17:36.293 [2024-11-20 11:40:39.604940] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.604955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610038] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.293 [2024-11-20 11:40:39.610049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:36.293 [2024-11-20 11:40:39.610059] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610069] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:36.293 [2024-11-20 11:40:39.610077] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:17:36.293 [2024-11-20 11:40:39.610084] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:17:36.293 [2024-11-20 11:40:39.610099] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.293 [2024-11-20 11:40:39.610128] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.293 [2024-11-20 11:40:39.610134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:17:36.293 [2024-11-20 11:40:39.610142] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:17:36.293 [2024-11-20 11:40:39.610148] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610155] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:17:36.293 [2024-11-20 11:40:39.610163] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.293 [2024-11-20 11:40:39.610188] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.293 [2024-11-20 11:40:39.610194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f 
sqhd:0003 p:0 m:0 dnr:0 00:17:36.293 [2024-11-20 11:40:39.610202] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:17:36.293 [2024-11-20 11:40:39.610208] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:36.293 [2024-11-20 11:40:39.610223] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.293 [2024-11-20 11:40:39.610248] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.293 [2024-11-20 11:40:39.610254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:36.293 [2024-11-20 11:40:39.610260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:36.293 [2024-11-20 11:40:39.610267] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610275] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.293 [2024-11-20 11:40:39.610304] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.293 [2024-11-20 11:40:39.610310] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:36.293 [2024-11-20 11:40:39.610317] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:36.293 [2024-11-20 11:40:39.610323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:36.293 [2024-11-20 11:40:39.610329] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:36.293 [2024-11-20 11:40:39.610446] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:17:36.293 [2024-11-20 11:40:39.610452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:36.293 [2024-11-20 11:40:39.610465] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.293 [2024-11-20 11:40:39.610488] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.293 [2024-11-20 11:40:39.610494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:36.293 [2024-11-20 11:40:39.610501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 
(timeout 15000 ms) 00:17:36.293 [2024-11-20 11:40:39.610507] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610515] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.293 [2024-11-20 11:40:39.610542] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.293 [2024-11-20 11:40:39.610548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:36.293 [2024-11-20 11:40:39.610555] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:36.293 [2024-11-20 11:40:39.610561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:36.293 [2024-11-20 11:40:39.610567] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610574] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:17:36.293 [2024-11-20 11:40:39.610583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:36.293 [2024-11-20 11:40:39.610594] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 
cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610641] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.293 [2024-11-20 11:40:39.610647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:36.293 [2024-11-20 11:40:39.610657] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:17:36.293 [2024-11-20 11:40:39.610663] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:17:36.293 [2024-11-20 11:40:39.610669] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:17:36.293 [2024-11-20 11:40:39.610676] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:17:36.293 [2024-11-20 11:40:39.610682] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:17:36.293 [2024-11-20 11:40:39.610688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:17:36.293 [2024-11-20 11:40:39.610694] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:36.293 [2024-11-20 11:40:39.610715] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.293 [2024-11-20 11:40:39.610741] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.293 [2024-11-20 11:40:39.610746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:36.293 [2024-11-20 11:40:39.610756] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x1c1100 00:17:36.293 [2024-11-20 11:40:39.610763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.294 [2024-11-20 11:40:39.610771] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x1c1100 00:17:36.294 [2024-11-20 11:40:39.610778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.294 [2024-11-20 11:40:39.610785] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.294 [2024-11-20 11:40:39.610792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.294 [2024-11-20 11:40:39.610799] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x1c1100 00:17:36.294 [2024-11-20 11:40:39.610806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.294 [2024-11-20 11:40:39.610812] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:36.294 [2024-11-20 11:40:39.610818] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 
0x10 lkey 0x1c1100 00:17:36.294 [2024-11-20 11:40:39.610829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:36.294 [2024-11-20 11:40:39.610837] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.294 [2024-11-20 11:40:39.610845] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.294 [2024-11-20 11:40:39.610870] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.294 [2024-11-20 11:40:39.610875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:17:36.294 [2024-11-20 11:40:39.610883] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:17:36.294 [2024-11-20 11:40:39.610889] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:17:36.294 [2024-11-20 11:40:39.610895] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x1c1100 00:17:36.294 [2024-11-20 11:40:39.610904] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.294 [2024-11-20 11:40:39.610912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x1c1100 00:17:36.294 [2024-11-20 11:40:39.610936] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.294 [2024-11-20 11:40:39.610942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 
cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:36.294 [2024-11-20 11:40:39.610951] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x1c1100 00:17:36.294 [2024-11-20 11:40:39.610963] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:17:36.294 [2024-11-20 11:40:39.610990] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.294 [2024-11-20 11:40:39.610998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x1c1100 00:17:36.294 [2024-11-20 11:40:39.611007] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x1c1100 00:17:36.294 [2024-11-20 11:40:39.611014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.294 [2024-11-20 11:40:39.611029] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.294 [2024-11-20 11:40:39.611041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:36.294 [2024-11-20 11:40:39.611053] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x1c1100 00:17:36.294 [2024-11-20 11:40:39.611061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x1c1100 00:17:36.294 [2024-11-20 11:40:39.611067] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x1c1100 00:17:36.294 [2024-11-20 11:40:39.611074] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.294 
[2024-11-20 11:40:39.611079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:36.294 [2024-11-20 11:40:39.611085] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x1c1100 00:17:36.294 [2024-11-20 11:40:39.611092] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.294 [2024-11-20 11:40:39.611097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:36.294 [2024-11-20 11:40:39.611107] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x1c1100 00:17:36.294 [2024-11-20 11:40:39.611115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x1c1100 00:17:36.294 [2024-11-20 11:40:39.611122] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x1c1100 00:17:36.294 [2024-11-20 11:40:39.611144] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.294 [2024-11-20 11:40:39.611149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:36.294 [2024-11-20 11:40:39.611163] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x1c1100 00:17:36.294 ===================================================== 00:17:36.294 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:36.294 ===================================================== 00:17:36.294 Controller Capabilities/Features 00:17:36.294 ================================ 00:17:36.294 Vendor ID: 0000 00:17:36.294 Subsystem Vendor ID: 0000 00:17:36.294 Serial Number: .................... 
00:17:36.294 Model Number: ........................................ 00:17:36.294 Firmware Version: 25.01 00:17:36.294 Recommended Arb Burst: 0 00:17:36.294 IEEE OUI Identifier: 00 00 00 00:17:36.294 Multi-path I/O 00:17:36.294 May have multiple subsystem ports: No 00:17:36.294 May have multiple controllers: No 00:17:36.294 Associated with SR-IOV VF: No 00:17:36.294 Max Data Transfer Size: 131072 00:17:36.294 Max Number of Namespaces: 0 00:17:36.294 Max Number of I/O Queues: 1024 00:17:36.294 NVMe Specification Version (VS): 1.3 00:17:36.294 NVMe Specification Version (Identify): 1.3 00:17:36.294 Maximum Queue Entries: 128 00:17:36.294 Contiguous Queues Required: Yes 00:17:36.294 Arbitration Mechanisms Supported 00:17:36.294 Weighted Round Robin: Not Supported 00:17:36.294 Vendor Specific: Not Supported 00:17:36.294 Reset Timeout: 15000 ms 00:17:36.294 Doorbell Stride: 4 bytes 00:17:36.294 NVM Subsystem Reset: Not Supported 00:17:36.294 Command Sets Supported 00:17:36.294 NVM Command Set: Supported 00:17:36.294 Boot Partition: Not Supported 00:17:36.294 Memory Page Size Minimum: 4096 bytes 00:17:36.294 Memory Page Size Maximum: 4096 bytes 00:17:36.294 Persistent Memory Region: Not Supported 00:17:36.294 Optional Asynchronous Events Supported 00:17:36.294 Namespace Attribute Notices: Not Supported 00:17:36.294 Firmware Activation Notices: Not Supported 00:17:36.294 ANA Change Notices: Not Supported 00:17:36.294 PLE Aggregate Log Change Notices: Not Supported 00:17:36.294 LBA Status Info Alert Notices: Not Supported 00:17:36.294 EGE Aggregate Log Change Notices: Not Supported 00:17:36.294 Normal NVM Subsystem Shutdown event: Not Supported 00:17:36.294 Zone Descriptor Change Notices: Not Supported 00:17:36.294 Discovery Log Change Notices: Supported 00:17:36.294 Controller Attributes 00:17:36.294 128-bit Host Identifier: Not Supported 00:17:36.294 Non-Operational Permissive Mode: Not Supported 00:17:36.294 NVM Sets: Not Supported 00:17:36.294 Read Recovery Levels: Not 
Supported 00:17:36.294 Endurance Groups: Not Supported 00:17:36.294 Predictable Latency Mode: Not Supported 00:17:36.294 Traffic Based Keep ALive: Not Supported 00:17:36.294 Namespace Granularity: Not Supported 00:17:36.294 SQ Associations: Not Supported 00:17:36.294 UUID List: Not Supported 00:17:36.294 Multi-Domain Subsystem: Not Supported 00:17:36.294 Fixed Capacity Management: Not Supported 00:17:36.294 Variable Capacity Management: Not Supported 00:17:36.294 Delete Endurance Group: Not Supported 00:17:36.294 Delete NVM Set: Not Supported 00:17:36.294 Extended LBA Formats Supported: Not Supported 00:17:36.294 Flexible Data Placement Supported: Not Supported 00:17:36.294 00:17:36.294 Controller Memory Buffer Support 00:17:36.294 ================================ 00:17:36.294 Supported: No 00:17:36.294 00:17:36.294 Persistent Memory Region Support 00:17:36.294 ================================ 00:17:36.294 Supported: No 00:17:36.294 00:17:36.294 Admin Command Set Attributes 00:17:36.294 ============================ 00:17:36.294 Security Send/Receive: Not Supported 00:17:36.294 Format NVM: Not Supported 00:17:36.294 Firmware Activate/Download: Not Supported 00:17:36.294 Namespace Management: Not Supported 00:17:36.294 Device Self-Test: Not Supported 00:17:36.294 Directives: Not Supported 00:17:36.294 NVMe-MI: Not Supported 00:17:36.294 Virtualization Management: Not Supported 00:17:36.294 Doorbell Buffer Config: Not Supported 00:17:36.294 Get LBA Status Capability: Not Supported 00:17:36.294 Command & Feature Lockdown Capability: Not Supported 00:17:36.294 Abort Command Limit: 1 00:17:36.294 Async Event Request Limit: 4 00:17:36.294 Number of Firmware Slots: N/A 00:17:36.294 Firmware Slot 1 Read-Only: N/A 00:17:36.294 Firmware Activation Without Reset: N/A 00:17:36.294 Multiple Update Detection Support: N/A 00:17:36.294 Firmware Update Granularity: No Information Provided 00:17:36.294 Per-Namespace SMART Log: No 00:17:36.294 Asymmetric Namespace Access Log Page: Not 
Supported 00:17:36.294 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:36.294 Command Effects Log Page: Not Supported 00:17:36.295 Get Log Page Extended Data: Supported 00:17:36.295 Telemetry Log Pages: Not Supported 00:17:36.295 Persistent Event Log Pages: Not Supported 00:17:36.295 Supported Log Pages Log Page: May Support 00:17:36.295 Commands Supported & Effects Log Page: Not Supported 00:17:36.295 Feature Identifiers & Effects Log Page:May Support 00:17:36.295 NVMe-MI Commands & Effects Log Page: May Support 00:17:36.295 Data Area 4 for Telemetry Log: Not Supported 00:17:36.295 Error Log Page Entries Supported: 128 00:17:36.295 Keep Alive: Not Supported 00:17:36.295 00:17:36.295 NVM Command Set Attributes 00:17:36.295 ========================== 00:17:36.295 Submission Queue Entry Size 00:17:36.295 Max: 1 00:17:36.295 Min: 1 00:17:36.295 Completion Queue Entry Size 00:17:36.295 Max: 1 00:17:36.295 Min: 1 00:17:36.295 Number of Namespaces: 0 00:17:36.295 Compare Command: Not Supported 00:17:36.295 Write Uncorrectable Command: Not Supported 00:17:36.295 Dataset Management Command: Not Supported 00:17:36.295 Write Zeroes Command: Not Supported 00:17:36.295 Set Features Save Field: Not Supported 00:17:36.295 Reservations: Not Supported 00:17:36.295 Timestamp: Not Supported 00:17:36.295 Copy: Not Supported 00:17:36.295 Volatile Write Cache: Not Present 00:17:36.295 Atomic Write Unit (Normal): 1 00:17:36.295 Atomic Write Unit (PFail): 1 00:17:36.295 Atomic Compare & Write Unit: 1 00:17:36.295 Fused Compare & Write: Supported 00:17:36.295 Scatter-Gather List 00:17:36.295 SGL Command Set: Supported 00:17:36.295 SGL Keyed: Supported 00:17:36.295 SGL Bit Bucket Descriptor: Not Supported 00:17:36.295 SGL Metadata Pointer: Not Supported 00:17:36.295 Oversized SGL: Not Supported 00:17:36.295 SGL Metadata Address: Not Supported 00:17:36.295 SGL Offset: Supported 00:17:36.295 Transport SGL Data Block: Not Supported 00:17:36.295 Replay Protected Memory Block: Not 
Supported 00:17:36.295 00:17:36.295 Firmware Slot Information 00:17:36.295 ========================= 00:17:36.295 Active slot: 0 00:17:36.295 00:17:36.295 00:17:36.295 Error Log 00:17:36.295 ========= 00:17:36.295 00:17:36.295 Active Namespaces 00:17:36.295 ================= 00:17:36.295 Discovery Log Page 00:17:36.295 ================== 00:17:36.295 Generation Counter: 2 00:17:36.295 Number of Records: 2 00:17:36.295 Record Format: 0 00:17:36.295 00:17:36.295 Discovery Log Entry 0 00:17:36.295 ---------------------- 00:17:36.295 Transport Type: 1 (RDMA) 00:17:36.295 Address Family: 1 (IPv4) 00:17:36.295 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:36.295 Entry Flags: 00:17:36.295 Duplicate Returned Information: 1 00:17:36.295 Explicit Persistent Connection Support for Discovery: 1 00:17:36.295 Transport Requirements: 00:17:36.295 Secure Channel: Not Required 00:17:36.295 Port ID: 0 (0x0000) 00:17:36.295 Controller ID: 65535 (0xffff) 00:17:36.295 Admin Max SQ Size: 128 00:17:36.295 Transport Service Identifier: 4420 00:17:36.295 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:36.295 Transport Address: 10.0.0.2 00:17:36.295 Transport Specific Address Subtype - RDMA 00:17:36.295 RDMA QP Service Type: 1 (Reliable Connected) 00:17:36.295 RDMA Provider Type: 1 (No provider specified) 00:17:36.295 RDMA CM Service: 1 (RDMA_CM) 00:17:36.295 Discovery Log Entry 1 00:17:36.295 ---------------------- 00:17:36.295 Transport Type: 1 (RDMA) 00:17:36.295 Address Family: 1 (IPv4) 00:17:36.295 Subsystem Type: 2 (NVM Subsystem) 00:17:36.295 Entry Flags: 00:17:36.295 Duplicate Returned Information: 0 00:17:36.295 Explicit Persistent Connection Support for Discovery: 0 00:17:36.295 Transport Requirements: 00:17:36.295 Secure Channel: Not Required 00:17:36.295 Port ID: 0 (0x0000) 00:17:36.295 Controller ID: 65535 (0xffff) 00:17:36.295 Admin Max SQ Size: [2024-11-20 11:40:39.611235] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:17:36.295 [2024-11-20 11:40:39.611246] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 12743 doesn't match qid 00:17:36.295 [2024-11-20 11:40:39.611260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32644 cdw0:2fcd1dd0 sqhd:5320 p:0 m:0 dnr:0 00:17:36.295 [2024-11-20 11:40:39.611267] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 12743 doesn't match qid 00:17:36.295 [2024-11-20 11:40:39.611275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32644 cdw0:2fcd1dd0 sqhd:5320 p:0 m:0 dnr:0 00:17:36.295 [2024-11-20 11:40:39.611282] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 12743 doesn't match qid 00:17:36.295 [2024-11-20 11:40:39.611292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32644 cdw0:2fcd1dd0 sqhd:5320 p:0 m:0 dnr:0 00:17:36.295 [2024-11-20 11:40:39.611298] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 12743 doesn't match qid 00:17:36.295 [2024-11-20 11:40:39.611306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32644 cdw0:2fcd1dd0 sqhd:5320 p:0 m:0 dnr:0 00:17:36.295 [2024-11-20 11:40:39.611315] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x1c1100 00:17:36.295 [2024-11-20 11:40:39.611323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.295 [2024-11-20 11:40:39.611342] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.295 [2024-11-20 11:40:39.611348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:17:36.295 [2024-11-20 11:40:39.611357] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.295 [2024-11-20 11:40:39.611364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.295 [2024-11-20 11:40:39.611371] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x1c1100 00:17:36.295 [2024-11-20 11:40:39.611387] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.295 [2024-11-20 11:40:39.611393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:36.295 [2024-11-20 11:40:39.611401] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:17:36.295 [2024-11-20 11:40:39.611407] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:17:36.295 [2024-11-20 11:40:39.611413] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x1c1100 00:17:36.295 [2024-11-20 11:40:39.611422] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.295 [2024-11-20 11:40:39.611430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.295 [2024-11-20 11:40:39.611446] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.295 [2024-11-20 11:40:39.611451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:17:36.295 [2024-11-20 11:40:39.611458] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x1c1100 00:17:36.295 [2024-11-20 11:40:39.611468] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.295 [2024-11-20 11:40:39.611476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.295 [2024-11-20 11:40:39.611494] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.295 [2024-11-20 11:40:39.611500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:17:36.295 [2024-11-20 11:40:39.611506] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x1c1100 00:17:36.295 [2024-11-20 11:40:39.611515] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.295 [2024-11-20 11:40:39.611523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.295 [2024-11-20 11:40:39.611539] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.295 [2024-11-20 11:40:39.611545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:17:36.295 [2024-11-20 11:40:39.611553] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x1c1100 00:17:36.295 [2024-11-20 11:40:39.611562] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.295 [2024-11-20 11:40:39.611570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.295 [2024-11-20 11:40:39.611586] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.295 [2024-11-20 11:40:39.611592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:17:36.295 [2024-11-20 11:40:39.611598] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x1c1100 00:17:36.295 [2024-11-20 11:40:39.611607] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.295 [2024-11-20 11:40:39.611615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.295 [2024-11-20 11:40:39.611637] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.295 [2024-11-20 11:40:39.611643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:17:36.295 [2024-11-20 11:40:39.611650] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x1c1100 00:17:36.295 [2024-11-20 11:40:39.611659] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.295 [2024-11-20 11:40:39.611667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.295 [2024-11-20 11:40:39.611683] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.296 [2024-11-20 11:40:39.611689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:17:36.296 [2024-11-20 11:40:39.611695] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.611704] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.611712] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.296 [2024-11-20 11:40:39.611734] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.296 [2024-11-20 11:40:39.611740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:17:36.296 [2024-11-20 11:40:39.611746] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.611755] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.611763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.296 [2024-11-20 11:40:39.611779] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.296 [2024-11-20 11:40:39.611785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:17:36.296 [2024-11-20 11:40:39.611791] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.611800] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.611808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.296 [2024-11-20 11:40:39.611830] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.296 [2024-11-20 11:40:39.611836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:17:36.296 [2024-11-20 11:40:39.611844] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.611853] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.611860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.296 [2024-11-20 11:40:39.611880] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.296 [2024-11-20 11:40:39.611886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:17:36.296 [2024-11-20 11:40:39.611892] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.611901] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.611909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.296 [2024-11-20 11:40:39.611925] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.296 [2024-11-20 11:40:39.611931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:17:36.296 [2024-11-20 11:40:39.611937] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.611946] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.611954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.296 [2024-11-20 
11:40:39.611978] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.296 [2024-11-20 11:40:39.611983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:17:36.296 [2024-11-20 11:40:39.611990] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.611998] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.296 [2024-11-20 11:40:39.612024] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.296 [2024-11-20 11:40:39.612030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:17:36.296 [2024-11-20 11:40:39.612042] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612052] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.296 [2024-11-20 11:40:39.612074] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.296 [2024-11-20 11:40:39.612079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:17:36.296 [2024-11-20 11:40:39.612086] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612095] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.296 [2024-11-20 11:40:39.612116] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.296 [2024-11-20 11:40:39.612123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:17:36.296 [2024-11-20 11:40:39.612130] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612139] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.296 [2024-11-20 11:40:39.612161] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.296 [2024-11-20 11:40:39.612167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:36.296 [2024-11-20 11:40:39.612173] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612182] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.296 [2024-11-20 11:40:39.612211] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.296 [2024-11-20 11:40:39.612217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:17:36.296 [2024-11-20 11:40:39.612223] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612232] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.296 [2024-11-20 11:40:39.612258] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.296 [2024-11-20 11:40:39.612264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:17:36.296 [2024-11-20 11:40:39.612270] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612279] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.296 [2024-11-20 11:40:39.612305] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.296 [2024-11-20 11:40:39.612310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:17:36.296 [2024-11-20 11:40:39.612317] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612326] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612333] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.296 [2024-11-20 11:40:39.612349] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.296 [2024-11-20 11:40:39.612355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:17:36.296 [2024-11-20 11:40:39.612361] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612370] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.296 [2024-11-20 11:40:39.612394] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.296 [2024-11-20 11:40:39.612401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:17:36.296 [2024-11-20 11:40:39.612408] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612417] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.296 [2024-11-20 11:40:39.612444] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.296 [2024-11-20 11:40:39.612450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:36.296 [2024-11-20 11:40:39.612457] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612465] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.296 [2024-11-20 11:40:39.612489] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.296 [2024-11-20 11:40:39.612495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:17:36.296 [2024-11-20 11:40:39.612501] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612510] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.296 [2024-11-20 11:40:39.612518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.296 [2024-11-20 11:40:39.612536] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.297 [2024-11-20 11:40:39.612542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:17:36.297 [2024-11-20 11:40:39.612548] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612557] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.297 [2024-11-20 
11:40:39.612587] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.297 [2024-11-20 11:40:39.612592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:17:36.297 [2024-11-20 11:40:39.612599] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612608] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.297 [2024-11-20 11:40:39.612635] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.297 [2024-11-20 11:40:39.612641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:17:36.297 [2024-11-20 11:40:39.612647] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612656] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.297 [2024-11-20 11:40:39.612683] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.297 [2024-11-20 11:40:39.612689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:17:36.297 [2024-11-20 11:40:39.612696] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612704] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.297 [2024-11-20 11:40:39.612732] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.297 [2024-11-20 11:40:39.612738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:17:36.297 [2024-11-20 11:40:39.612744] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612753] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.297 [2024-11-20 11:40:39.612783] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.297 [2024-11-20 11:40:39.612788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:17:36.297 [2024-11-20 11:40:39.612795] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612804] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.297 [2024-11-20 11:40:39.612829] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.297 [2024-11-20 11:40:39.612835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:17:36.297 [2024-11-20 11:40:39.612841] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612850] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.297 [2024-11-20 11:40:39.612876] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.297 [2024-11-20 11:40:39.612882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:17:36.297 [2024-11-20 11:40:39.612888] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612897] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.297 [2024-11-20 11:40:39.612921] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.297 [2024-11-20 11:40:39.612926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:17:36.297 [2024-11-20 11:40:39.612933] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612942] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612949] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.297 [2024-11-20 11:40:39.612969] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.297 [2024-11-20 11:40:39.612975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:17:36.297 [2024-11-20 11:40:39.612981] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612990] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.612998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.297 [2024-11-20 11:40:39.613017] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.297 [2024-11-20 11:40:39.613023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:17:36.297 [2024-11-20 11:40:39.613029] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.613041] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.613049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.297 [2024-11-20 11:40:39.613066] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.297 [2024-11-20 11:40:39.613071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:17:36.297 [2024-11-20 11:40:39.613078] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.613087] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.613095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.297 [2024-11-20 11:40:39.613114] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.297 [2024-11-20 11:40:39.613120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:17:36.297 [2024-11-20 11:40:39.613126] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.613135] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.613143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.297 [2024-11-20 11:40:39.613163] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.297 [2024-11-20 11:40:39.613169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:17:36.297 [2024-11-20 11:40:39.613175] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.613184] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.613192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.297 [2024-11-20 
11:40:39.613206] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.297 [2024-11-20 11:40:39.613211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:17:36.297 [2024-11-20 11:40:39.613218] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.613227] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.297 [2024-11-20 11:40:39.613238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.297 [2024-11-20 11:40:39.613258] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.297 [2024-11-20 11:40:39.613264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:17:36.297 [2024-11-20 11:40:39.613270] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613279] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.298 [2024-11-20 11:40:39.613301] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.298 [2024-11-20 11:40:39.613307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:17:36.298 [2024-11-20 11:40:39.613313] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613322] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.298 [2024-11-20 11:40:39.613352] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.298 [2024-11-20 11:40:39.613357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:17:36.298 [2024-11-20 11:40:39.613364] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613373] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.298 [2024-11-20 11:40:39.613400] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.298 [2024-11-20 11:40:39.613406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:17:36.298 [2024-11-20 11:40:39.613412] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613421] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.298 [2024-11-20 11:40:39.613451] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.298 [2024-11-20 11:40:39.613457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:17:36.298 [2024-11-20 11:40:39.613463] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613472] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.298 [2024-11-20 11:40:39.613494] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.298 [2024-11-20 11:40:39.613500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:17:36.298 [2024-11-20 11:40:39.613506] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613515] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.298 [2024-11-20 11:40:39.613544] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.298 [2024-11-20 11:40:39.613550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:17:36.298 [2024-11-20 11:40:39.613556] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613565] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613573] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.298 [2024-11-20 11:40:39.613593] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.298 [2024-11-20 11:40:39.613598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:17:36.298 [2024-11-20 11:40:39.613605] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613614] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.298 [2024-11-20 11:40:39.613637] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.298 [2024-11-20 11:40:39.613643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:17:36.298 [2024-11-20 11:40:39.613650] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613658] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.298 [2024-11-20 11:40:39.613682] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.298 [2024-11-20 11:40:39.613688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:36.298 [2024-11-20 11:40:39.613694] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613703] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.298 [2024-11-20 11:40:39.613729] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.298 [2024-11-20 11:40:39.613734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:17:36.298 [2024-11-20 11:40:39.613741] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613750] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.298 [2024-11-20 11:40:39.613778] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.298 [2024-11-20 11:40:39.613783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:17:36.298 [2024-11-20 11:40:39.613790] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613800] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.298 [2024-11-20 
11:40:39.613822] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.298 [2024-11-20 11:40:39.613828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:17:36.298 [2024-11-20 11:40:39.613834] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613843] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.298 [2024-11-20 11:40:39.613869] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.298 [2024-11-20 11:40:39.613874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:17:36.298 [2024-11-20 11:40:39.613881] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613889] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.298 [2024-11-20 11:40:39.613911] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.298 [2024-11-20 11:40:39.613917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:17:36.298 [2024-11-20 11:40:39.613923] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613932] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.298 [2024-11-20 11:40:39.613958] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.298 [2024-11-20 11:40:39.613964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:36.298 [2024-11-20 11:40:39.613970] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613979] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.613987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.298 [2024-11-20 11:40:39.614003] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.298 [2024-11-20 11:40:39.614008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:17:36.298 [2024-11-20 11:40:39.614015] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.614024] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.618036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.298 [2024-11-20 11:40:39.618047] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.298 [2024-11-20 11:40:39.618052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:17:36.298 [2024-11-20 11:40:39.618059] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.618070] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.298 [2024-11-20 11:40:39.618078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.298 [2024-11-20 11:40:39.618097] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.298 [2024-11-20 11:40:39.618103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000a p:0 m:0 dnr:0 00:17:36.299 [2024-11-20 11:40:39.618110] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x1c1100 00:17:36.299 [2024-11-20 11:40:39.618117] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:17:36.299 128 00:17:36.299 Transport Service Identifier: 4420 00:17:36.299 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:36.299 Transport Address: 10.0.0.2 00:17:36.299 Transport Specific Address Subtype - RDMA 00:17:36.299 RDMA QP Service Type: 1 (Reliable Connected) 00:17:36.299 RDMA Provider Type: 1 (No provider specified) 00:17:36.299 RDMA CM Service: 1 (RDMA_CM) 00:17:36.299 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:36.299 [2024-11-20 11:40:39.695841] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:17:36.299 [2024-11-20 11:40:39.695888] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663698 ] 00:17:36.299 [2024-11-20 11:40:39.757072] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:17:36.299 [2024-11-20 11:40:39.757144] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:17:36.299 [2024-11-20 11:40:39.757163] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:17:36.299 [2024-11-20 11:40:39.757168] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:17:36.299 [2024-11-20 11:40:39.757195] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:17:36.299 [2024-11-20 11:40:39.762526] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:17:36.561 [2024-11-20 11:40:39.777368] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:17:36.561 [2024-11-20 11:40:39.777384] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:17:36.561 [2024-11-20 11:40:39.777392] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777400] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777407] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777413] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777420] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777426] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777435] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777442] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777448] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777455] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777461] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777467] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777473] nvme_rdma.c: 
878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777480] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777486] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777492] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777499] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777505] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777511] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777521] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777529] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777538] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777546] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777556] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777563] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777569] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777576] 
nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777582] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777588] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777595] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777601] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777607] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:17:36.561 [2024-11-20 11:40:39.777613] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:17:36.561 [2024-11-20 11:40:39.777617] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:17:36.561 [2024-11-20 11:40:39.777633] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.777647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x1c1100 00:17:36.561 [2024-11-20 11:40:39.783040] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.561 [2024-11-20 11:40:39.783056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:36.561 [2024-11-20 11:40:39.783067] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.783076] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:36.561 [2024-11-20 11:40:39.783084] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:17:36.561 [2024-11-20 11:40:39.783091] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:17:36.561 [2024-11-20 11:40:39.783106] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.783116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.561 [2024-11-20 11:40:39.783137] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.561 [2024-11-20 11:40:39.783143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:17:36.561 [2024-11-20 11:40:39.783150] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:17:36.561 [2024-11-20 11:40:39.783156] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.783163] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:17:36.561 [2024-11-20 11:40:39.783171] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.783179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.561 [2024-11-20 11:40:39.783195] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.561 [2024-11-20 11:40:39.783201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:17:36.561 
[2024-11-20 11:40:39.783208] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:17:36.561 [2024-11-20 11:40:39.783214] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.783221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:36.561 [2024-11-20 11:40:39.783229] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.783237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.561 [2024-11-20 11:40:39.783257] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.561 [2024-11-20 11:40:39.783263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:36.561 [2024-11-20 11:40:39.783270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:36.561 [2024-11-20 11:40:39.783276] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x1c1100 00:17:36.561 [2024-11-20 11:40:39.783284] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.783292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.562 [2024-11-20 11:40:39.783308] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.562 [2024-11-20 11:40:39.783314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:36.562 [2024-11-20 11:40:39.783321] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:36.562 [2024-11-20 11:40:39.783330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:36.562 [2024-11-20 11:40:39.783336] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.783343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:36.562 [2024-11-20 11:40:39.783452] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:17:36.562 [2024-11-20 11:40:39.783459] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:36.562 [2024-11-20 11:40:39.783468] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.783475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.562 [2024-11-20 11:40:39.783494] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.562 [2024-11-20 11:40:39.783499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:36.562 [2024-11-20 11:40:39.783506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:36.562 [2024-11-20 11:40:39.783512] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 
0x2000003cf708 length 0x10 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.783521] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.783529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.562 [2024-11-20 11:40:39.783547] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.562 [2024-11-20 11:40:39.783552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:36.562 [2024-11-20 11:40:39.783559] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:36.562 [2024-11-20 11:40:39.783565] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:36.562 [2024-11-20 11:40:39.783571] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.783578] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:17:36.562 [2024-11-20 11:40:39.783587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:36.562 [2024-11-20 11:40:39.783598] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.783606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x1c1100 00:17:36.562 [2024-11-20 11:40:39.783644] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.562 [2024-11-20 11:40:39.783649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:36.562 [2024-11-20 11:40:39.783658] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:17:36.562 [2024-11-20 11:40:39.783664] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:17:36.562 [2024-11-20 11:40:39.783670] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:17:36.562 [2024-11-20 11:40:39.783677] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:17:36.562 [2024-11-20 11:40:39.783684] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:17:36.562 [2024-11-20 11:40:39.783690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:17:36.562 [2024-11-20 11:40:39.783696] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.783706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:36.562 [2024-11-20 11:40:39.783714] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.783722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.562 [2024-11-20 11:40:39.783740] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.562 
[2024-11-20 11:40:39.783746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:36.562 [2024-11-20 11:40:39.783755] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.783762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.562 [2024-11-20 11:40:39.783769] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.783776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.562 [2024-11-20 11:40:39.783783] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.783790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.562 [2024-11-20 11:40:39.783798] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.783805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.562 [2024-11-20 11:40:39.783811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:36.562 [2024-11-20 11:40:39.783817] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.783828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:36.562 
[2024-11-20 11:40:39.783836] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.783843] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.562 [2024-11-20 11:40:39.783864] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.562 [2024-11-20 11:40:39.783869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:17:36.562 [2024-11-20 11:40:39.783876] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:17:36.562 [2024-11-20 11:40:39.783882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:36.562 [2024-11-20 11:40:39.783889] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.783898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:17:36.562 [2024-11-20 11:40:39.783908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:36.562 [2024-11-20 11:40:39.783916] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.783924] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.562 [2024-11-20 11:40:39.783944] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
00:17:36.562 [2024-11-20 11:40:39.783949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:17:36.562 [2024-11-20 11:40:39.784002] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:17:36.562 [2024-11-20 11:40:39.784008] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.784017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:36.562 [2024-11-20 11:40:39.784026] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.562 [2024-11-20 11:40:39.784056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x1c1100 00:17:36.562 [2024-11-20 11:40:39.784080] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.562 [2024-11-20 11:40:39.784086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:36.562 [2024-11-20 11:40:39.784100] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:17:36.562 [2024-11-20 11:40:39.784115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:17:36.562 [2024-11-20 11:40:39.784121] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:17:36.563 
[2024-11-20 11:40:39.784138] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784182] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.563 [2024-11-20 11:40:39.784187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:36.563 [2024-11-20 11:40:39.784201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:36.563 [2024-11-20 11:40:39.784208] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:36.563 [2024-11-20 11:40:39.784225] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784254] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.563 [2024-11-20 11:40:39.784260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:36.563 [2024-11-20 11:40:39.784269] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to 
identify ns iocs specific (timeout 30000 ms) 00:17:36.563 [2024-11-20 11:40:39.784275] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784283] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:17:36.563 [2024-11-20 11:40:39.784292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:17:36.563 [2024-11-20 11:40:39.784300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:36.563 [2024-11-20 11:40:39.784306] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:36.563 [2024-11-20 11:40:39.784313] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:17:36.563 [2024-11-20 11:40:39.784319] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:17:36.563 [2024-11-20 11:40:39.784325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:17:36.563 [2024-11-20 11:40:39.784332] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:17:36.563 [2024-11-20 11:40:39.784348] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.563 
[2024-11-20 11:40:39.784363] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.563 [2024-11-20 11:40:39.784383] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.563 [2024-11-20 11:40:39.784389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:36.563 [2024-11-20 11:40:39.784396] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784402] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.563 [2024-11-20 11:40:39.784408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:36.563 [2024-11-20 11:40:39.784414] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784423] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784431] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.563 [2024-11-20 11:40:39.784448] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.563 [2024-11-20 11:40:39.784454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:36.563 [2024-11-20 11:40:39.784460] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784471] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.563 [2024-11-20 11:40:39.784499] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.563 [2024-11-20 11:40:39.784504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:36.563 [2024-11-20 11:40:39.784511] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784520] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784528] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.563 [2024-11-20 11:40:39.784551] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.563 [2024-11-20 11:40:39.784556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:17:36.563 [2024-11-20 11:40:39.784563] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784577] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784594] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784611] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784627] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784644] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.563 [2024-11-20 11:40:39.784649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:36.563 [2024-11-20 11:40:39.784661] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784668] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.563 [2024-11-20 11:40:39.784673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:36.563 [2024-11-20 11:40:39.784684] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x1c1100 
00:17:36.563 [2024-11-20 11:40:39.784690] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.563 [2024-11-20 11:40:39.784696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:36.563 [2024-11-20 11:40:39.784703] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x1c1100 00:17:36.563 [2024-11-20 11:40:39.784709] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.563 [2024-11-20 11:40:39.784716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:36.563 [2024-11-20 11:40:39.784725] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x1c1100 00:17:36.563 ===================================================== 00:17:36.563 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:36.563 ===================================================== 00:17:36.563 Controller Capabilities/Features 00:17:36.563 ================================ 00:17:36.563 Vendor ID: 8086 00:17:36.563 Subsystem Vendor ID: 8086 00:17:36.563 Serial Number: SPDK00000000000001 00:17:36.563 Model Number: SPDK bdev Controller 00:17:36.563 Firmware Version: 25.01 00:17:36.563 Recommended Arb Burst: 6 00:17:36.563 IEEE OUI Identifier: e4 d2 5c 00:17:36.563 Multi-path I/O 00:17:36.563 May have multiple subsystem ports: Yes 00:17:36.563 May have multiple controllers: Yes 00:17:36.564 Associated with SR-IOV VF: No 00:17:36.564 Max Data Transfer Size: 131072 00:17:36.564 Max Number of Namespaces: 32 00:17:36.564 Max Number of I/O Queues: 127 00:17:36.564 NVMe Specification Version (VS): 1.3 00:17:36.564 NVMe Specification Version (Identify): 1.3 00:17:36.564 Maximum Queue Entries: 128 00:17:36.564 Contiguous Queues Required: Yes 00:17:36.564 Arbitration Mechanisms Supported 00:17:36.564 
Weighted Round Robin: Not Supported 00:17:36.564 Vendor Specific: Not Supported 00:17:36.564 Reset Timeout: 15000 ms 00:17:36.564 Doorbell Stride: 4 bytes 00:17:36.564 NVM Subsystem Reset: Not Supported 00:17:36.564 Command Sets Supported 00:17:36.564 NVM Command Set: Supported 00:17:36.564 Boot Partition: Not Supported 00:17:36.564 Memory Page Size Minimum: 4096 bytes 00:17:36.564 Memory Page Size Maximum: 4096 bytes 00:17:36.564 Persistent Memory Region: Not Supported 00:17:36.564 Optional Asynchronous Events Supported 00:17:36.564 Namespace Attribute Notices: Supported 00:17:36.564 Firmware Activation Notices: Not Supported 00:17:36.564 ANA Change Notices: Not Supported 00:17:36.564 PLE Aggregate Log Change Notices: Not Supported 00:17:36.564 LBA Status Info Alert Notices: Not Supported 00:17:36.564 EGE Aggregate Log Change Notices: Not Supported 00:17:36.564 Normal NVM Subsystem Shutdown event: Not Supported 00:17:36.564 Zone Descriptor Change Notices: Not Supported 00:17:36.564 Discovery Log Change Notices: Not Supported 00:17:36.564 Controller Attributes 00:17:36.564 128-bit Host Identifier: Supported 00:17:36.564 Non-Operational Permissive Mode: Not Supported 00:17:36.564 NVM Sets: Not Supported 00:17:36.564 Read Recovery Levels: Not Supported 00:17:36.564 Endurance Groups: Not Supported 00:17:36.564 Predictable Latency Mode: Not Supported 00:17:36.564 Traffic Based Keep ALive: Not Supported 00:17:36.564 Namespace Granularity: Not Supported 00:17:36.564 SQ Associations: Not Supported 00:17:36.564 UUID List: Not Supported 00:17:36.564 Multi-Domain Subsystem: Not Supported 00:17:36.564 Fixed Capacity Management: Not Supported 00:17:36.564 Variable Capacity Management: Not Supported 00:17:36.564 Delete Endurance Group: Not Supported 00:17:36.564 Delete NVM Set: Not Supported 00:17:36.564 Extended LBA Formats Supported: Not Supported 00:17:36.564 Flexible Data Placement Supported: Not Supported 00:17:36.564 00:17:36.564 Controller Memory Buffer Support 
00:17:36.564 ================================ 00:17:36.564 Supported: No 00:17:36.564 00:17:36.564 Persistent Memory Region Support 00:17:36.564 ================================ 00:17:36.564 Supported: No 00:17:36.564 00:17:36.564 Admin Command Set Attributes 00:17:36.564 ============================ 00:17:36.564 Security Send/Receive: Not Supported 00:17:36.564 Format NVM: Not Supported 00:17:36.564 Firmware Activate/Download: Not Supported 00:17:36.564 Namespace Management: Not Supported 00:17:36.564 Device Self-Test: Not Supported 00:17:36.564 Directives: Not Supported 00:17:36.564 NVMe-MI: Not Supported 00:17:36.564 Virtualization Management: Not Supported 00:17:36.564 Doorbell Buffer Config: Not Supported 00:17:36.564 Get LBA Status Capability: Not Supported 00:17:36.564 Command & Feature Lockdown Capability: Not Supported 00:17:36.564 Abort Command Limit: 4 00:17:36.564 Async Event Request Limit: 4 00:17:36.564 Number of Firmware Slots: N/A 00:17:36.564 Firmware Slot 1 Read-Only: N/A 00:17:36.564 Firmware Activation Without Reset: N/A 00:17:36.564 Multiple Update Detection Support: N/A 00:17:36.564 Firmware Update Granularity: No Information Provided 00:17:36.564 Per-Namespace SMART Log: No 00:17:36.564 Asymmetric Namespace Access Log Page: Not Supported 00:17:36.564 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:36.564 Command Effects Log Page: Supported 00:17:36.564 Get Log Page Extended Data: Supported 00:17:36.564 Telemetry Log Pages: Not Supported 00:17:36.564 Persistent Event Log Pages: Not Supported 00:17:36.564 Supported Log Pages Log Page: May Support 00:17:36.564 Commands Supported & Effects Log Page: Not Supported 00:17:36.564 Feature Identifiers & Effects Log Page:May Support 00:17:36.564 NVMe-MI Commands & Effects Log Page: May Support 00:17:36.564 Data Area 4 for Telemetry Log: Not Supported 00:17:36.564 Error Log Page Entries Supported: 128 00:17:36.564 Keep Alive: Supported 00:17:36.564 Keep Alive Granularity: 10000 ms 00:17:36.564 
00:17:36.564 NVM Command Set Attributes 00:17:36.564 ========================== 00:17:36.564 Submission Queue Entry Size 00:17:36.564 Max: 64 00:17:36.564 Min: 64 00:17:36.564 Completion Queue Entry Size 00:17:36.564 Max: 16 00:17:36.564 Min: 16 00:17:36.564 Number of Namespaces: 32 00:17:36.564 Compare Command: Supported 00:17:36.564 Write Uncorrectable Command: Not Supported 00:17:36.564 Dataset Management Command: Supported 00:17:36.564 Write Zeroes Command: Supported 00:17:36.564 Set Features Save Field: Not Supported 00:17:36.564 Reservations: Supported 00:17:36.564 Timestamp: Not Supported 00:17:36.564 Copy: Supported 00:17:36.564 Volatile Write Cache: Present 00:17:36.564 Atomic Write Unit (Normal): 1 00:17:36.564 Atomic Write Unit (PFail): 1 00:17:36.564 Atomic Compare & Write Unit: 1 00:17:36.564 Fused Compare & Write: Supported 00:17:36.564 Scatter-Gather List 00:17:36.564 SGL Command Set: Supported 00:17:36.564 SGL Keyed: Supported 00:17:36.564 SGL Bit Bucket Descriptor: Not Supported 00:17:36.564 SGL Metadata Pointer: Not Supported 00:17:36.564 Oversized SGL: Not Supported 00:17:36.564 SGL Metadata Address: Not Supported 00:17:36.564 SGL Offset: Supported 00:17:36.564 Transport SGL Data Block: Not Supported 00:17:36.564 Replay Protected Memory Block: Not Supported 00:17:36.564 00:17:36.564 Firmware Slot Information 00:17:36.564 ========================= 00:17:36.564 Active slot: 1 00:17:36.564 Slot 1 Firmware Revision: 25.01 00:17:36.564 00:17:36.564 00:17:36.564 Commands Supported and Effects 00:17:36.564 ============================== 00:17:36.564 Admin Commands 00:17:36.564 -------------- 00:17:36.564 Get Log Page (02h): Supported 00:17:36.564 Identify (06h): Supported 00:17:36.564 Abort (08h): Supported 00:17:36.564 Set Features (09h): Supported 00:17:36.564 Get Features (0Ah): Supported 00:17:36.564 Asynchronous Event Request (0Ch): Supported 00:17:36.564 Keep Alive (18h): Supported 00:17:36.564 I/O Commands 00:17:36.564 ------------ 00:17:36.564 
Flush (00h): Supported LBA-Change 00:17:36.564 Write (01h): Supported LBA-Change 00:17:36.564 Read (02h): Supported 00:17:36.564 Compare (05h): Supported 00:17:36.564 Write Zeroes (08h): Supported LBA-Change 00:17:36.564 Dataset Management (09h): Supported LBA-Change 00:17:36.564 Copy (19h): Supported LBA-Change 00:17:36.564 00:17:36.564 Error Log 00:17:36.564 ========= 00:17:36.564 00:17:36.564 Arbitration 00:17:36.564 =========== 00:17:36.564 Arbitration Burst: 1 00:17:36.564 00:17:36.564 Power Management 00:17:36.564 ================ 00:17:36.564 Number of Power States: 1 00:17:36.564 Current Power State: Power State #0 00:17:36.564 Power State #0: 00:17:36.564 Max Power: 0.00 W 00:17:36.564 Non-Operational State: Operational 00:17:36.564 Entry Latency: Not Reported 00:17:36.564 Exit Latency: Not Reported 00:17:36.564 Relative Read Throughput: 0 00:17:36.564 Relative Read Latency: 0 00:17:36.564 Relative Write Throughput: 0 00:17:36.564 Relative Write Latency: 0 00:17:36.564 Idle Power: Not Reported 00:17:36.564 Active Power: Not Reported 00:17:36.564 Non-Operational Permissive Mode: Not Supported 00:17:36.564 00:17:36.564 Health Information 00:17:36.564 ================== 00:17:36.564 Critical Warnings: 00:17:36.564 Available Spare Space: OK 00:17:36.564 Temperature: OK 00:17:36.564 Device Reliability: OK 00:17:36.564 Read Only: No 00:17:36.564 Volatile Memory Backup: OK 00:17:36.564 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:36.564 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:36.564 Available Spare: 0% 00:17:36.564 Available Spare Threshold: 0% 00:17:36.564 Life Percentage Used:[2024-11-20 11:40:39.784805] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x1c1100 00:17:36.564 [2024-11-20 11:40:39.784814] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.565 [2024-11-20 
11:40:39.784831] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.565 [2024-11-20 11:40:39.784837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.784844] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.784873] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:17:36.565 [2024-11-20 11:40:39.784883] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 29294 doesn't match qid 00:17:36.565 [2024-11-20 11:40:39.784897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32530 cdw0:2c8f6580 sqhd:1320 p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.784904] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 29294 doesn't match qid 00:17:36.565 [2024-11-20 11:40:39.784912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32530 cdw0:2c8f6580 sqhd:1320 p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.784919] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 29294 doesn't match qid 00:17:36.565 [2024-11-20 11:40:39.784926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32530 cdw0:2c8f6580 sqhd:1320 p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.784933] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 29294 doesn't match qid 00:17:36.565 [2024-11-20 11:40:39.784940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32530 cdw0:2c8f6580 sqhd:1320 p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.784949] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.784957] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.565 [2024-11-20 11:40:39.784974] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.565 [2024-11-20 11:40:39.784980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.784988] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.784996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.565 [2024-11-20 11:40:39.785002] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785017] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.565 [2024-11-20 11:40:39.785022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.785030] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:17:36.565 [2024-11-20 11:40:39.785044] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:17:36.565 [2024-11-20 11:40:39.785050] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785059] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.565 [2024-11-20 
11:40:39.785085] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.565 [2024-11-20 11:40:39.785091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.785097] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785106] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.565 [2024-11-20 11:40:39.785132] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.565 [2024-11-20 11:40:39.785138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.785145] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785154] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.565 [2024-11-20 11:40:39.785177] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.565 [2024-11-20 11:40:39.785183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.785190] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785199] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.565 [2024-11-20 11:40:39.785227] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.565 [2024-11-20 11:40:39.785232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.785239] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785248] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.565 [2024-11-20 11:40:39.785276] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.565 [2024-11-20 11:40:39.785282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.785289] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785298] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.565 [2024-11-20 11:40:39.785324] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.565 [2024-11-20 11:40:39.785330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.785337] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785348] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.565 [2024-11-20 11:40:39.785375] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.565 [2024-11-20 11:40:39.785381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.785388] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785396] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.565 [2024-11-20 11:40:39.785422] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.565 [2024-11-20 11:40:39.785428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.785435] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785443] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785451] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.565 [2024-11-20 11:40:39.785469] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.565 [2024-11-20 11:40:39.785475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.785481] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785490] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.565 [2024-11-20 11:40:39.785518] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.565 [2024-11-20 11:40:39.785523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.785530] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785538] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.565 [2024-11-20 11:40:39.785562] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.565 [2024-11-20 11:40:39.785568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.785574] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785583] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.565 [2024-11-20 11:40:39.785612] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.565 [2024-11-20 11:40:39.785618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.785625] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785635] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.565 [2024-11-20 11:40:39.785659] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.565 [2024-11-20 11:40:39.785664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:36.565 [2024-11-20 11:40:39.785671] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x1c1100 00:17:36.565 [2024-11-20 11:40:39.785680] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.785688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.566 [2024-11-20 
11:40:39.785707] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.566 [2024-11-20 11:40:39.785713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:17:36.566 [2024-11-20 11:40:39.785719] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.785728] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.785736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.566 [2024-11-20 11:40:39.785757] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.566 [2024-11-20 11:40:39.785763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:17:36.566 [2024-11-20 11:40:39.785770] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.785778] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.785786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.566 [2024-11-20 11:40:39.785804] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.566 [2024-11-20 11:40:39.785810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:17:36.566 [2024-11-20 11:40:39.785816] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.785825] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.785833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.566 [2024-11-20 11:40:39.785851] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.566 [2024-11-20 11:40:39.785856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:17:36.566 [2024-11-20 11:40:39.785863] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.785871] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.785879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.566 [2024-11-20 11:40:39.785897] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.566 [2024-11-20 11:40:39.785903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:17:36.566 [2024-11-20 11:40:39.785910] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.785919] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.785927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.566 [2024-11-20 11:40:39.785943] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.566 [2024-11-20 11:40:39.785949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:17:36.566 [2024-11-20 11:40:39.785955] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.785964] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.785972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.566 [2024-11-20 11:40:39.785986] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.566 [2024-11-20 11:40:39.785991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:17:36.566 [2024-11-20 11:40:39.785998] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.786007] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.786015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.566 [2024-11-20 11:40:39.786030] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.566 [2024-11-20 11:40:39.786039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:17:36.566 [2024-11-20 11:40:39.786046] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.786055] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.786063] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.566 [2024-11-20 11:40:39.786082] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.566 [2024-11-20 11:40:39.786088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:17:36.566 [2024-11-20 11:40:39.786094] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.786103] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.786111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.566 [2024-11-20 11:40:39.786125] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.566 [2024-11-20 11:40:39.786130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:17:36.566 [2024-11-20 11:40:39.786137] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.786146] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.786154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.566 [2024-11-20 11:40:39.786173] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.566 [2024-11-20 11:40:39.786179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:17:36.566 [2024-11-20 11:40:39.786187] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.786196] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.786204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.566 [2024-11-20 11:40:39.786227] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.566 [2024-11-20 11:40:39.786233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:17:36.566 [2024-11-20 11:40:39.786239] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.786248] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.786256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.566 [2024-11-20 11:40:39.786272] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.566 [2024-11-20 11:40:39.786277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:17:36.566 [2024-11-20 11:40:39.786284] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.786293] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.786301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.566 [2024-11-20 
11:40:39.786318] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.566 [2024-11-20 11:40:39.786324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:17:36.566 [2024-11-20 11:40:39.786331] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.786339] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.786347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.566 [2024-11-20 11:40:39.786371] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.566 [2024-11-20 11:40:39.786376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:17:36.566 [2024-11-20 11:40:39.786383] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x1c1100 00:17:36.566 [2024-11-20 11:40:39.786392] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.567 [2024-11-20 11:40:39.786423] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.567 [2024-11-20 11:40:39.786428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:17:36.567 [2024-11-20 11:40:39.786435] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786444] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.567 [2024-11-20 11:40:39.786469] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.567 [2024-11-20 11:40:39.786476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:17:36.567 [2024-11-20 11:40:39.786483] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786492] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.567 [2024-11-20 11:40:39.786515] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.567 [2024-11-20 11:40:39.786521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:17:36.567 [2024-11-20 11:40:39.786527] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786536] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.567 [2024-11-20 11:40:39.786558] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.567 [2024-11-20 11:40:39.786564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:17:36.567 [2024-11-20 11:40:39.786570] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786579] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.567 [2024-11-20 11:40:39.786608] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.567 [2024-11-20 11:40:39.786614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:17:36.567 [2024-11-20 11:40:39.786621] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786629] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.567 [2024-11-20 11:40:39.786657] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.567 [2024-11-20 11:40:39.786663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:17:36.567 [2024-11-20 11:40:39.786669] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786678] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786686] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.567 [2024-11-20 11:40:39.786708] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.567 [2024-11-20 11:40:39.786713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:17:36.567 [2024-11-20 11:40:39.786720] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786728] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.567 [2024-11-20 11:40:39.786758] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.567 [2024-11-20 11:40:39.786765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:17:36.567 [2024-11-20 11:40:39.786771] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786780] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.567 [2024-11-20 11:40:39.786808] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.567 [2024-11-20 11:40:39.786813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:17:36.567 [2024-11-20 11:40:39.786820] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786829] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.567 [2024-11-20 11:40:39.786856] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.567 [2024-11-20 11:40:39.786862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:17:36.567 [2024-11-20 11:40:39.786868] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786877] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.567 [2024-11-20 11:40:39.786907] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.567 [2024-11-20 11:40:39.786912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:36.567 [2024-11-20 11:40:39.786919] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786927] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.567 [2024-11-20 
11:40:39.786953] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.567 [2024-11-20 11:40:39.786959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:17:36.567 [2024-11-20 11:40:39.786965] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786974] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.786982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.567 [2024-11-20 11:40:39.787005] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.567 [2024-11-20 11:40:39.787011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:17:36.567 [2024-11-20 11:40:39.787017] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.787026] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.791039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:36.567 [2024-11-20 11:40:39.791057] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:36.567 [2024-11-20 11:40:39.791062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0004 p:0 m:0 dnr:0 00:17:36.567 [2024-11-20 11:40:39.791069] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x1c1100 00:17:36.567 [2024-11-20 11:40:39.791076] 
nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:17:36.567 0% 00:17:36.567 Data Units Read: 0 00:17:36.567 Data Units Written: 0 00:17:36.567 Host Read Commands: 0 00:17:36.567 Host Write Commands: 0 00:17:36.567 Controller Busy Time: 0 minutes 00:17:36.567 Power Cycles: 0 00:17:36.567 Power On Hours: 0 hours 00:17:36.567 Unsafe Shutdowns: 0 00:17:36.567 Unrecoverable Media Errors: 0 00:17:36.567 Lifetime Error Log Entries: 0 00:17:36.567 Warning Temperature Time: 0 minutes 00:17:36.567 Critical Temperature Time: 0 minutes 00:17:36.567 00:17:36.567 Number of Queues 00:17:36.567 ================ 00:17:36.567 Number of I/O Submission Queues: 127 00:17:36.567 Number of I/O Completion Queues: 127 00:17:36.567 00:17:36.567 Active Namespaces 00:17:36.567 ================= 00:17:36.567 Namespace ID:1 00:17:36.567 Error Recovery Timeout: Unlimited 00:17:36.567 Command Set Identifier: NVM (00h) 00:17:36.567 Deallocate: Supported 00:17:36.567 Deallocated/Unwritten Error: Not Supported 00:17:36.567 Deallocated Read Value: Unknown 00:17:36.567 Deallocate in Write Zeroes: Not Supported 00:17:36.567 Deallocated Guard Field: 0xFFFF 00:17:36.567 Flush: Supported 00:17:36.567 Reservation: Supported 00:17:36.567 Namespace Sharing Capabilities: Multiple Controllers 00:17:36.567 Size (in LBAs): 131072 (0GiB) 00:17:36.567 Capacity (in LBAs): 131072 (0GiB) 00:17:36.567 Utilization (in LBAs): 131072 (0GiB) 00:17:36.567 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:36.567 EUI64: ABCDEF0123456789 00:17:36.567 UUID: 68f9331d-f9a4-43b0-a2d6-9b416f6e128c 00:17:36.567 Thin Provisioning: Not Supported 00:17:36.567 Per-NS Atomic Units: Yes 00:17:36.567 Atomic Boundary Size (Normal): 0 00:17:36.567 Atomic Boundary Size (PFail): 0 00:17:36.567 Atomic Boundary Offset: 0 00:17:36.567 Maximum Single Source Range Length: 65535 00:17:36.567 Maximum Copy Length: 65535 00:17:36.567 Maximum Source Range Count: 1 00:17:36.568 
NGUID/EUI64 Never Reused: No 00:17:36.568 Namespace Write Protected: No 00:17:36.568 Number of LBA Formats: 1 00:17:36.568 Current LBA Format: LBA Format #00 00:17:36.568 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:36.568 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@99 -- # sync 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # set +e 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:17:36.568 rmmod nvme_rdma 00:17:36.568 rmmod nvme_fabrics 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # set -e 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/common.sh@107 -- # return 0 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # '[' -n 1663553 ']' 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@337 -- # killprocess 1663553 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1663553 ']' 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1663553 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1663553 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1663553' 00:17:36.568 killing process with pid 1663553 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1663553 00:17:36.568 11:40:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1663553 00:17:36.825 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # nvmf_fini 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@264 -- # local dev 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@267 -- # remove_target_ns 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify 
-- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@268 -- # delete_main_bridge 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@130 -- # return 0 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:36.826 11:40:40 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # _dev=0 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # dev_map=() 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/setup.sh@284 -- # iptr 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@542 -- # iptables-save 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@542 -- # iptables-restore 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:17:36.826 00:17:36.826 real 0m7.634s 00:17:36.826 user 0m6.391s 00:17:36.826 sys 0m5.016s 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.826 11:40:40 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:36.826 ************************************ 00:17:36.826 END TEST nvmf_identify 00:17:36.826 ************************************ 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@21 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.085 ************************************ 00:17:37.085 START TEST nvmf_perf 00:17:37.085 ************************************ 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:17:37.085 * Looking for test storage... 00:17:37.085 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.085 
11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:37.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.085 --rc genhtml_branch_coverage=1 00:17:37.085 --rc genhtml_function_coverage=1 00:17:37.085 --rc genhtml_legend=1 00:17:37.085 --rc geninfo_all_blocks=1 00:17:37.085 --rc geninfo_unexecuted_blocks=1 00:17:37.085 00:17:37.085 ' 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:37.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.085 --rc genhtml_branch_coverage=1 00:17:37.085 --rc genhtml_function_coverage=1 00:17:37.085 --rc genhtml_legend=1 00:17:37.085 --rc geninfo_all_blocks=1 00:17:37.085 --rc geninfo_unexecuted_blocks=1 00:17:37.085 00:17:37.085 ' 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:37.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.085 --rc genhtml_branch_coverage=1 00:17:37.085 --rc genhtml_function_coverage=1 00:17:37.085 --rc genhtml_legend=1 00:17:37.085 --rc geninfo_all_blocks=1 00:17:37.085 --rc geninfo_unexecuted_blocks=1 00:17:37.085 00:17:37.085 ' 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:37.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.085 --rc genhtml_branch_coverage=1 00:17:37.085 --rc genhtml_function_coverage=1 00:17:37.085 --rc genhtml_legend=1 00:17:37.085 --rc geninfo_all_blocks=1 00:17:37.085 --rc geninfo_unexecuted_blocks=1 00:17:37.085 00:17:37.085 ' 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # 
NVMF_TRANSPORT_OPTS= 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:37.085 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # 
export PATH 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@50 -- # : 0 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:37.344 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:37.344 11:40:40 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # remove_target_ns 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # xtrace_disable 00:17:37.344 11:40:40 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- 
nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@131 -- # pci_devs=() 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@135 -- # net_devs=() 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@136 -- # e810=() 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@136 -- # local -ga e810 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@137 -- # x722=() 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@137 -- # local -ga x722 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@138 -- # mlx=() 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@138 -- # local -ga mlx 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.909 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:43.910 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf 
-- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:43.910 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # 
(( 1 == 0 )) 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:43.910 Found net devices under 0000:18:00.0: mlx_0_0 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:43.910 Found net devices under 0000:18:00.1: mlx_0_1 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # get_rdma_if_list 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@75 -- # rdma_devs=() 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:17:43.910 
11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@89 -- # continue 2 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@89 -- # continue 2 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # is_hw=yes 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 
00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@61 -- # uname 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_cm 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_core 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_umad 00:17:43.910 11:40:46 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe iw_cm 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@28 -- # local -g _dev 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( 
_dev < max + no )) 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # key_initiator=target1 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:17:43.910 11:40:47 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772161 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:17:43.910 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:17:43.911 10.0.0.1 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772162 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf 
-- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:17:43.911 10.0.0.2 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:43.911 11:40:47 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target0 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:43.911 11:40:47 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:17:43.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.028 ms 00:17:43.911 00:17:43.911 --- 10.0.0.2 ping statistics --- 00:17:43.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.911 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target0 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:43.911 11:40:47 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:17:43.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:43.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.024 ms 00:17:43.911 00:17:43.911 --- 10.0.0.2 ping statistics --- 00:17:43.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.911 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair++ )) 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # return 0 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target0 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:43.911 11:40:47 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target1 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:17:43.911 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:17:43.911 11:40:47 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target0 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 
00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target1 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 
00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # nvmfpid=1666660 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # waitforlisten 1666660 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1666660 ']' 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:43.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.912 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:43.912 [2024-11-20 11:40:47.305508] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:17:43.912 [2024-11-20 11:40:47.305572] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.171 [2024-11-20 11:40:47.385910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:44.171 [2024-11-20 11:40:47.431148] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.171 [2024-11-20 11:40:47.431190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.171 [2024-11-20 11:40:47.431200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.171 [2024-11-20 11:40:47.431223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.171 [2024-11-20 11:40:47.431230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:44.171 [2024-11-20 11:40:47.432561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.171 [2024-11-20 11:40:47.432649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.171 [2024-11-20 11:40:47.432739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.171 [2024-11-20 11:40:47.432741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.171 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.171 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:17:44.171 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:44.171 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:44.171 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:44.171 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.171 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:44.171 11:40:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:17:47.452 11:40:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:17:47.452 11:40:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:47.452 11:40:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5f:00.0 00:17:47.452 11:40:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:47.710 11:40:51 
nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:47.710 11:40:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5f:00.0 ']' 00:17:47.711 11:40:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:47.711 11:40:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:17:47.711 11:40:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:17:47.969 [2024-11-20 11:40:51.248295] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:17:47.969 [2024-11-20 11:40:51.268279] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x245bc50/0x2331780) succeed. 00:17:47.969 [2024-11-20 11:40:51.277741] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x245d100/0x23b1440) succeed. 
00:17:47.969 11:40:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:48.227 11:40:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:48.227 11:40:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:48.485 11:40:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:48.485 11:40:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:48.743 11:40:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:17:48.743 [2024-11-20 11:40:52.199472] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:17:49.003 11:40:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 10.0.0.2 -s 4420 00:17:49.003 11:40:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5f:00.0 ']' 00:17:49.003 11:40:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:17:49.003 11:40:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:49.003 11:40:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:17:50.379 Initializing NVMe Controllers 00:17:50.379 Attached to 
NVMe Controller at 0000:5f:00.0 [8086:0a54] 00:17:50.379 Associating PCIE (0000:5f:00.0) NSID 1 with lcore 0 00:17:50.379 Initialization complete. Launching workers. 00:17:50.379 ======================================================== 00:17:50.379 Latency(us) 00:17:50.379 Device Information : IOPS MiB/s Average min max 00:17:50.379 PCIE (0000:5f:00.0) NSID 1 from core 0: 97790.86 382.00 326.89 29.98 7442.46 00:17:50.379 ======================================================== 00:17:50.379 Total : 97790.86 382.00 326.89 29.98 7442.46 00:17:50.379 00:17:50.379 11:40:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:53.744 Initializing NVMe Controllers 00:17:53.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:53.744 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:53.744 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:53.745 Initialization complete. Launching workers. 
00:17:53.745 ======================================================== 00:17:53.745 Latency(us) 00:17:53.745 Device Information : IOPS MiB/s Average min max 00:17:53.745 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6713.99 26.23 148.73 47.70 7056.19 00:17:53.745 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5189.99 20.27 191.69 69.44 7070.67 00:17:53.745 ======================================================== 00:17:53.745 Total : 11903.99 46.50 167.46 47.70 7070.67 00:17:53.745 00:17:53.745 11:40:57 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:57.029 Initializing NVMe Controllers 00:17:57.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:57.029 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:57.029 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:57.029 Initialization complete. Launching workers. 
00:17:57.029 ======================================================== 00:17:57.029 Latency(us) 00:17:57.029 Device Information : IOPS MiB/s Average min max 00:17:57.029 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18102.00 70.71 1767.29 498.64 5506.47 00:17:57.029 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7978.97 7738.69 8233.35 00:17:57.029 ======================================================== 00:17:57.029 Total : 22134.00 86.46 2898.83 498.64 8233.35 00:17:57.029 00:17:57.287 11:41:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:17:57.287 11:41:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:01.473 Initializing NVMe Controllers 00:18:01.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:01.473 Controller IO queue size 128, less than required. 00:18:01.473 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:01.473 Controller IO queue size 128, less than required. 00:18:01.473 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:01.473 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:01.473 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:01.473 Initialization complete. Launching workers. 
00:18:01.473 ======================================================== 00:18:01.473 Latency(us) 00:18:01.473 Device Information : IOPS MiB/s Average min max 00:18:01.473 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3825.62 956.41 33572.78 16024.51 73802.51 00:18:01.473 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3994.09 998.52 31866.85 15408.97 56733.39 00:18:01.473 ======================================================== 00:18:01.473 Total : 7819.71 1954.93 32701.44 15408.97 73802.51 00:18:01.473 00:18:01.473 11:41:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:18:02.038 No valid NVMe controllers or AIO or URING devices found 00:18:02.038 Initializing NVMe Controllers 00:18:02.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.038 Controller IO queue size 128, less than required. 00:18:02.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:02.038 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:02.038 Controller IO queue size 128, less than required. 00:18:02.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:02.038 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:18:02.038 WARNING: Some requested NVMe devices were skipped 00:18:02.038 11:41:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:18:06.225 Initializing NVMe Controllers 00:18:06.225 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:06.225 Controller IO queue size 128, less than required. 00:18:06.225 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:06.225 Controller IO queue size 128, less than required. 00:18:06.225 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:06.225 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:06.225 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:06.225 Initialization complete. Launching workers. 
00:18:06.225 00:18:06.225 ==================== 00:18:06.225 lcore 0, ns RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:06.225 RDMA transport: 00:18:06.225 dev name: mlx5_1 00:18:06.225 polls: 399544 00:18:06.225 idle_polls: 396457 00:18:06.225 completions: 41490 00:18:06.225 queued_requests: 1 00:18:06.225 total_send_wrs: 20745 00:18:06.225 send_doorbell_updates: 2841 00:18:06.225 total_recv_wrs: 20872 00:18:06.225 recv_doorbell_updates: 2843 00:18:06.225 --------------------------------- 00:18:06.225 00:18:06.225 ==================== 00:18:06.225 lcore 0, ns RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:06.225 RDMA transport: 00:18:06.225 dev name: mlx5_1 00:18:06.225 polls: 397009 00:18:06.225 idle_polls: 396752 00:18:06.225 completions: 19562 00:18:06.225 queued_requests: 1 00:18:06.225 total_send_wrs: 9781 00:18:06.225 send_doorbell_updates: 251 00:18:06.225 total_recv_wrs: 9908 00:18:06.225 recv_doorbell_updates: 252 00:18:06.225 --------------------------------- 00:18:06.225 ======================================================== 00:18:06.225 Latency(us) 00:18:06.225 Device Information : IOPS MiB/s Average min max 00:18:06.225 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5180.01 1295.00 24730.74 13044.66 61352.94 00:18:06.225 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2442.17 610.54 52324.57 30404.16 73077.45 00:18:06.225 ======================================================== 00:18:06.225 Total : 7622.18 1905.55 33571.90 13044.66 73077.45 00:18:06.225 00:18:06.483 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:06.483 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.483 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:18:06.483 11:41:09 
nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:06.483 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:06.483 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # nvmfcleanup 00:18:06.483 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@99 -- # sync 00:18:06.483 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:18:06.483 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:18:06.483 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # set +e 00:18:06.483 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # for i in {1..20} 00:18:06.483 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:18:06.483 rmmod nvme_rdma 00:18:06.483 rmmod nvme_fabrics 00:18:06.741 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:18:06.741 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # set -e 00:18:06.741 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # return 0 00:18:06.741 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # '[' -n 1666660 ']' 00:18:06.741 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@337 -- # killprocess 1666660 00:18:06.741 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1666660 ']' 00:18:06.741 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1666660 00:18:06.741 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:18:06.741 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.741 11:41:09 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1666660 00:18:06.741 11:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:06.741 11:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:06.741 11:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1666660' 00:18:06.741 killing process with pid 1666660 00:18:06.741 11:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1666660 00:18:06.741 11:41:10 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1666660 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # nvmf_fini 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@264 -- # local dev 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@267 -- # remove_target_ns 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@268 -- # delete_main_bridge 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@130 -- # return 0 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 
00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # _dev=0 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # dev_map=() 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/setup.sh@284 -- # iptr 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@542 -- # iptables-save 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@542 -- # iptables-restore 00:18:10.927 00:18:10.927 real 0m33.591s 00:18:10.927 user 1m47.984s 00:18:10.927 sys 0m6.623s 00:18:10.927 11:41:13 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:10.927 ************************************ 00:18:10.927 END TEST nvmf_perf 00:18:10.927 ************************************ 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.927 11:41:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.927 ************************************ 00:18:10.927 START TEST nvmf_fio_host 00:18:10.927 ************************************ 00:18:10.927 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:18:10.927 * Looking for test storage... 
00:18:10.927 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:10.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.928 --rc genhtml_branch_coverage=1 00:18:10.928 --rc genhtml_function_coverage=1 00:18:10.928 --rc genhtml_legend=1 00:18:10.928 --rc geninfo_all_blocks=1 00:18:10.928 --rc geninfo_unexecuted_blocks=1 00:18:10.928 00:18:10.928 ' 00:18:10.928 11:41:14 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:10.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.928 --rc genhtml_branch_coverage=1 00:18:10.928 --rc genhtml_function_coverage=1 00:18:10.928 --rc genhtml_legend=1 00:18:10.928 --rc geninfo_all_blocks=1 00:18:10.928 --rc geninfo_unexecuted_blocks=1 00:18:10.928 00:18:10.928 ' 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:10.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.928 --rc genhtml_branch_coverage=1 00:18:10.928 --rc genhtml_function_coverage=1 00:18:10.928 --rc genhtml_legend=1 00:18:10.928 --rc geninfo_all_blocks=1 00:18:10.928 --rc geninfo_unexecuted_blocks=1 00:18:10.928 00:18:10.928 ' 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:10.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.928 --rc genhtml_branch_coverage=1 00:18:10.928 --rc genhtml_function_coverage=1 00:18:10.928 --rc genhtml_legend=1 00:18:10.928 --rc geninfo_all_blocks=1 00:18:10.928 --rc geninfo_unexecuted_blocks=1 00:18:10.928 00:18:10.928 ' 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
paths/export.sh@5 -- # export PATH 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:18:10.928 11:41:14 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.928 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@50 -- # : 0 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:18:10.929 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:18:10.929 
11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # remove_target_ns 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # xtrace_disable 00:18:10.929 11:41:14 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.487 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:17.487 11:41:20 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@131 -- # pci_devs=() 00:18:17.487 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@131 -- # local -a pci_devs 00:18:17.487 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@132 -- # pci_net_devs=() 00:18:17.487 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@133 -- # pci_drivers=() 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@133 -- # local -A pci_drivers 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@135 -- # net_devs=() 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@135 -- # local -ga net_devs 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@136 -- # e810=() 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@136 -- # local -ga e810 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@137 -- # x722=() 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@137 -- # local -ga x722 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@138 -- # mlx=() 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@138 -- # local -ga mlx 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:17.488 11:41:20 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:17.488 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 
00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:17.488 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:17.488 Found net devices under 0000:18:00.0: mlx_0_0 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:17.488 Found net devices under 0000:18:00.1: mlx_0_1 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # get_rdma_if_list 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@75 -- # rdma_devs=() 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:18:17.488 11:41:20 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@89 -- # continue 2 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@89 -- # continue 2 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:18:17.488 11:41:20 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # is_hw=yes 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@61 -- # uname 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_cm 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_core 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_umad 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe iw_cm 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@28 -- # local -g _dev 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:18:17.488 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # key_initiator=target1 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:18:17.489 
11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772161 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:18:17.489 10.0.0.1 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772162 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:18:17.489 11:41:20 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:18:17.489 10.0.0.2 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # 
[[ rdma == tcp ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@38 -- # ping_ips 1 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target0 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:18:17.489 
11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:18:17.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:17.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:18:17.489 00:18:17.489 --- 10.0.0.2 ping statistics --- 00:18:17.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.489 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target0 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:18:17.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.023 ms 00:18:17.489 00:18:17.489 --- 10.0.0.2 ping statistics --- 00:18:17.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.489 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair++ )) 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # return 0 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:18:17.489 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target0 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local 
dev=target0 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:18:17.490 11:41:20 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1673034 00:18:17.490 
11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1673034 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1673034 ']' 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.490 11:41:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.490 [2024-11-20 11:41:20.398882] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:18:17.490 [2024-11-20 11:41:20.398951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.490 [2024-11-20 11:41:20.477841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:17.490 [2024-11-20 11:41:20.528271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:17.490 [2024-11-20 11:41:20.528322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.490 [2024-11-20 11:41:20.528332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.490 [2024-11-20 11:41:20.528340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.490 [2024-11-20 11:41:20.528348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.490 [2024-11-20 11:41:20.529654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.490 [2024-11-20 11:41:20.529743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.490 [2024-11-20 11:41:20.529820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:17.490 [2024-11-20 11:41:20.529822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.068 11:41:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.068 11:41:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:18:18.068 11:41:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:18.068 [2024-11-20 11:41:21.440951] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe88220/0xe8c710) succeed. 00:18:18.068 [2024-11-20 11:41:21.450192] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe898b0/0xecddb0) succeed. 
00:18:18.326 11:41:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:18.326 11:41:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:18.326 11:41:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.326 11:41:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:18.583 Malloc1 00:18:18.583 11:41:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:18.841 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:18.841 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:18:19.099 [2024-11-20 11:41:22.483632] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:18:19.099 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 10.0.0.2 -s 4420 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:19.357 11:41:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:19.615 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:19.615 fio-3.35 00:18:19.616 Starting 1 thread 00:18:22.146 00:18:22.146 test: (groupid=0, jobs=1): err= 0: pid=1673512: Wed Nov 20 11:41:25 2024 00:18:22.146 read: IOPS=17.4k, BW=67.9MiB/s (71.2MB/s)(136MiB/2004msec) 00:18:22.146 slat (nsec): min=1406, max=40343, avg=1552.46, stdev=440.38 00:18:22.146 clat (usec): min=2036, max=6604, avg=3654.18, stdev=106.42 00:18:22.146 lat (usec): min=2054, max=6606, avg=3655.73, stdev=106.34 00:18:22.146 clat percentiles (usec): 00:18:22.146 | 1.00th=[ 3294], 5.00th=[ 3621], 10.00th=[ 3621], 20.00th=[ 3621], 00:18:22.146 | 30.00th=[ 3654], 40.00th=[ 3654], 50.00th=[ 3654], 60.00th=[ 3654], 00:18:22.146 | 70.00th=[ 3654], 80.00th=[ 3687], 90.00th=[ 3687], 95.00th=[ 3687], 00:18:22.146 | 99.00th=[ 3982], 99.50th=[ 4015], 99.90th=[ 4752], 99.95th=[ 5735], 00:18:22.146 | 99.99th=[ 6587] 00:18:22.146 bw ( KiB/s): min=68080, max=70272, per=100.00%, avg=69552.00, stdev=1004.06, 
samples=4 00:18:22.146 iops : min=17020, max=17568, avg=17388.00, stdev=251.01, samples=4 00:18:22.146 write: IOPS=17.4k, BW=68.0MiB/s (71.3MB/s)(136MiB/2004msec); 0 zone resets 00:18:22.146 slat (nsec): min=1446, max=18275, avg=1895.35, stdev=515.80 00:18:22.146 clat (usec): min=2063, max=6592, avg=3652.44, stdev=105.86 00:18:22.146 lat (usec): min=2072, max=6594, avg=3654.33, stdev=105.79 00:18:22.146 clat percentiles (usec): 00:18:22.146 | 1.00th=[ 3294], 5.00th=[ 3621], 10.00th=[ 3621], 20.00th=[ 3621], 00:18:22.146 | 30.00th=[ 3654], 40.00th=[ 3654], 50.00th=[ 3654], 60.00th=[ 3654], 00:18:22.146 | 70.00th=[ 3654], 80.00th=[ 3654], 90.00th=[ 3687], 95.00th=[ 3687], 00:18:22.146 | 99.00th=[ 3982], 99.50th=[ 4015], 99.90th=[ 4752], 99.95th=[ 5669], 00:18:22.146 | 99.99th=[ 6521] 00:18:22.146 bw ( KiB/s): min=68272, max=70112, per=100.00%, avg=69616.00, stdev=896.64, samples=4 00:18:22.146 iops : min=17068, max=17528, avg=17404.00, stdev=224.16, samples=4 00:18:22.146 lat (msec) : 4=99.18%, 10=0.82% 00:18:22.146 cpu : usr=99.50%, sys=0.05%, ctx=16, majf=0, minf=4 00:18:22.146 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:22.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:22.146 issued rwts: total=34832,34879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:22.146 00:18:22.146 Run status group 0 (all jobs): 00:18:22.146 READ: bw=67.9MiB/s (71.2MB/s), 67.9MiB/s-67.9MiB/s (71.2MB/s-71.2MB/s), io=136MiB (143MB), run=2004-2004msec 00:18:22.146 WRITE: bw=68.0MiB/s (71.3MB/s), 68.0MiB/s-68.0MiB/s (71.3MB/s-71.3MB/s), io=136MiB (143MB), run=2004-2004msec 00:18:22.146 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:22.146 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:22.146 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:22.146 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:22.146 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:22.146 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:22.146 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:18:22.146 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:22.147 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:22.147 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:22.147 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:18:22.147 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:22.147 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:22.147 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:22.147 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:22.147 11:41:25 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:22.147 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:22.147 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:22.147 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:22.147 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:22.147 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:22.147 11:41:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:22.405 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:22.405 fio-3.35 00:18:22.405 Starting 1 thread 00:18:24.935 00:18:24.935 test: (groupid=0, jobs=1): err= 0: pid=1673969: Wed Nov 20 11:41:28 2024 00:18:24.935 read: IOPS=14.0k, BW=219MiB/s (230MB/s)(428MiB/1957msec) 00:18:24.935 slat (nsec): min=2306, max=57851, avg=2746.26, stdev=1456.17 00:18:24.935 clat (usec): min=279, max=11768, avg=1674.69, stdev=1453.92 00:18:24.935 lat (usec): min=281, max=11775, avg=1677.44, stdev=1454.61 00:18:24.935 clat percentiles (usec): 00:18:24.935 | 1.00th=[ 693], 5.00th=[ 783], 10.00th=[ 848], 20.00th=[ 930], 00:18:24.935 | 30.00th=[ 996], 40.00th=[ 1074], 50.00th=[ 1188], 60.00th=[ 1303], 00:18:24.935 | 70.00th=[ 1434], 80.00th=[ 1631], 90.00th=[ 4178], 95.00th=[ 5145], 00:18:24.935 | 99.00th=[ 7308], 99.50th=[ 8029], 99.90th=[10552], 99.95th=[11207], 00:18:24.935 | 99.99th=[11731] 00:18:24.935 
bw ( KiB/s): min=108096, max=114880, per=49.28%, avg=110472.00, stdev=3197.27, samples=4 00:18:24.935 iops : min= 6756, max= 7180, avg=6904.50, stdev=199.83, samples=4 00:18:24.935 write: IOPS=8076, BW=126MiB/s (132MB/s)(225MiB/1781msec); 0 zone resets 00:18:24.935 slat (usec): min=26, max=142, avg=30.71, stdev= 6.97 00:18:24.935 clat (usec): min=4236, max=21868, avg=12792.28, stdev=1855.54 00:18:24.935 lat (usec): min=4266, max=21899, avg=12823.00, stdev=1854.54 00:18:24.935 clat percentiles (usec): 00:18:24.935 | 1.00th=[ 7898], 5.00th=[10028], 10.00th=[10552], 20.00th=[11338], 00:18:24.935 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12780], 60.00th=[13173], 00:18:24.935 | 70.00th=[13698], 80.00th=[14222], 90.00th=[15008], 95.00th=[15795], 00:18:24.935 | 99.00th=[17695], 99.50th=[18220], 99.90th=[20055], 99.95th=[20841], 00:18:24.935 | 99.99th=[21627] 00:18:24.935 bw ( KiB/s): min=109472, max=119328, per=88.31%, avg=114128.00, stdev=4052.48, samples=4 00:18:24.935 iops : min= 6842, max= 7458, avg=7133.00, stdev=253.28, samples=4 00:18:24.935 lat (usec) : 500=0.02%, 750=1.89%, 1000=18.09% 00:18:24.935 lat (msec) : 2=37.13%, 4=1.79%, 10=8.30%, 20=32.74%, 50=0.04% 00:18:24.935 cpu : usr=96.36%, sys=2.00%, ctx=184, majf=0, minf=3 00:18:24.935 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:18:24.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:24.935 issued rwts: total=27419,14385,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.935 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:24.935 00:18:24.935 Run status group 0 (all jobs): 00:18:24.935 READ: bw=219MiB/s (230MB/s), 219MiB/s-219MiB/s (230MB/s-230MB/s), io=428MiB (449MB), run=1957-1957msec 00:18:24.935 WRITE: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=225MiB (236MB), run=1781-1781msec 00:18:24.936 11:41:28 
nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@99 -- # sync 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # set +e 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:18:24.936 rmmod nvme_rdma 00:18:24.936 rmmod nvme_fabrics 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # set -e 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # return 0 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # '[' -n 1673034 ']' 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@337 -- # killprocess 1673034 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1673034 ']' 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 1673034 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.936 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1673034 00:18:25.195 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:25.195 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:25.195 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1673034' 00:18:25.195 killing process with pid 1673034 00:18:25.195 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1673034 00:18:25.195 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1673034 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # nvmf_fini 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@264 -- # local dev 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@267 -- # remove_target_ns 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@268 -- # delete_main_bridge 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 
00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@130 -- # return 0 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:18:25.454 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:18:25.455 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:18:25.455 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:18:25.455 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:18:25.455 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # _dev=0 00:18:25.455 
11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # dev_map=() 00:18:25.455 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@284 -- # iptr 00:18:25.455 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@542 -- # iptables-save 00:18:25.455 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@542 -- # iptables-restore 00:18:25.455 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:18:25.455 00:18:25.455 real 0m14.735s 00:18:25.455 user 0m45.256s 00:18:25.455 sys 0m5.692s 00:18:25.455 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.455 11:41:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.455 ************************************ 00:18:25.455 END TEST nvmf_fio_host 00:18:25.455 ************************************ 00:18:25.455 11:41:28 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:18:25.455 11:41:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:25.455 11:41:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.455 11:41:28 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.455 ************************************ 00:18:25.455 START TEST nvmf_failover 00:18:25.455 ************************************ 00:18:25.455 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:18:25.455 * Looking for test storage... 
00:18:25.455 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:25.455 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:25.455 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:18:25.455 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:25.714 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:25.714 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.714 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.714 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.714 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.714 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.714 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.714 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.714 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.714 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.714 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.714 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.714 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:18:25.714 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:18:25.714 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.714 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:25.714 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:18:25.714 11:41:28 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:18:25.714 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.714 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:18:25.714 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.714 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:18:25.714 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:18:25.714 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.714 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:18:25.714 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.714 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.714 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.714 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:18:25.714 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.714 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:25.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.714 --rc genhtml_branch_coverage=1 00:18:25.714 --rc genhtml_function_coverage=1 00:18:25.714 --rc genhtml_legend=1 00:18:25.714 --rc geninfo_all_blocks=1 00:18:25.715 --rc geninfo_unexecuted_blocks=1 00:18:25.715 00:18:25.715 ' 00:18:25.715 11:41:29 
nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:25.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.715 --rc genhtml_branch_coverage=1 00:18:25.715 --rc genhtml_function_coverage=1 00:18:25.715 --rc genhtml_legend=1 00:18:25.715 --rc geninfo_all_blocks=1 00:18:25.715 --rc geninfo_unexecuted_blocks=1 00:18:25.715 00:18:25.715 ' 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:25.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.715 --rc genhtml_branch_coverage=1 00:18:25.715 --rc genhtml_function_coverage=1 00:18:25.715 --rc genhtml_legend=1 00:18:25.715 --rc geninfo_all_blocks=1 00:18:25.715 --rc geninfo_unexecuted_blocks=1 00:18:25.715 00:18:25.715 ' 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:25.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.715 --rc genhtml_branch_coverage=1 00:18:25.715 --rc genhtml_function_coverage=1 00:18:25.715 --rc genhtml_legend=1 00:18:25.715 --rc geninfo_all_blocks=1 00:18:25.715 --rc geninfo_unexecuted_blocks=1 00:18:25.715 00:18:25.715 ' 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- 
paths/export.sh@5 -- # export PATH 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@50 -- # : 0 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:18:25.715 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # prepare_net_devs 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # local -g is_hw=no 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # remove_target_ns 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:18:25.715 11:41:29 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # xtrace_disable 00:18:25.715 11:41:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@131 -- # pci_devs=() 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@131 -- # local -a pci_devs 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@132 -- # pci_net_devs=() 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@133 -- # pci_drivers=() 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@133 -- # local -A pci_drivers 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@135 -- # net_devs=() 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@135 -- # local -ga net_devs 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@136 -- # e810=() 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@136 -- # local -ga e810 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@137 -- # x722=() 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@137 -- # local -ga x722 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@138 -- # mlx=() 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@138 -- # local -ga mlx 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:18:32.282 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover 
-- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:32.283 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:32.283 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover 
-- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:32.283 Found net devices under 0000:18:00.0: mlx_0_0 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:32.283 Found net devices under 0000:18:00.1: mlx_0_1 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:18:32.283 
11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # get_rdma_if_list 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@75 -- # rdma_devs=() 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@89 -- # continue 2 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@87 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@89 -- # continue 2 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # is_hw=yes 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@61 -- # uname 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_cm 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_core 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_umad 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe iw_cm 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/common.sh@71 -- # modprobe rdma_ucm 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@27 -- # local -gA dev_map 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@28 -- # local -g _dev 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # key_initiator=target1 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772161 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:18:32.283 10.0.0.1 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 
ip=167772162 in_ns= 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:18:32.283 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772162 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:18:32.284 10.0.0.2 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:32.284 11:41:35 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@38 -- # ping_ips 1 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:32.284 11:41:35 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target0 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:18:32.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:32.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.018 ms 00:18:32.284 00:18:32.284 --- 10.0.0.2 ping statistics --- 00:18:32.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.284 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target0 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:18:32.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.022 ms 00:18:32.284 00:18:32.284 --- 10.0.0.2 ping statistics --- 00:18:32.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.284 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair++ )) 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # return 0 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target0 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target1 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target1 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:18:32.284 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local 
dev=target0 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target1 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target1 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:18:32.285 11:41:35 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # nvmfpid=1677236 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # waitforlisten 1677236 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1677236 ']' 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:32.285 [2024-11-20 11:41:35.368551] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:18:32.285 [2024-11-20 11:41:35.368606] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.285 [2024-11-20 11:41:35.447976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:32.285 [2024-11-20 11:41:35.495603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.285 [2024-11-20 11:41:35.495643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:32.285 [2024-11-20 11:41:35.495653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.285 [2024-11-20 11:41:35.495677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.285 [2024-11-20 11:41:35.495685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.285 [2024-11-20 11:41:35.497028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.285 [2024-11-20 11:41:35.497107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.285 [2024-11-20 11:41:35.497109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.285 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:32.544 [2024-11-20 11:41:35.844422] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x149c9f0/0x14a0ee0) succeed. 00:18:32.544 [2024-11-20 11:41:35.853465] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x149dfe0/0x14e2580) succeed. 
00:18:32.544 11:41:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:32.803 Malloc0 00:18:32.803 11:41:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:33.061 11:41:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:33.319 11:41:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:18:33.319 [2024-11-20 11:41:36.786560] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:18:33.630 11:41:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4421 00:18:33.630 [2024-11-20 11:41:36.987132] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4421 *** 00:18:33.630 11:41:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4422 00:18:33.929 [2024-11-20 11:41:37.187836] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4422 *** 00:18:33.929 11:41:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1677455 00:18:33.930 11:41:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:33.930 11:41:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:33.930 11:41:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1677455 /var/tmp/bdevperf.sock 00:18:33.930 11:41:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1677455 ']' 00:18:33.930 11:41:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.930 11:41:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.930 11:41:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:33.930 11:41:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.930 11:41:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:34.863 11:41:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.863 11:41:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:34.863 11:41:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:35.121 NVMe0n1 00:18:35.121 11:41:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:35.379 00:18:35.379 11:41:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:35.379 11:41:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1677639 00:18:35.379 11:41:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:36.362 11:41:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:18:36.621 11:41:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:39.906 11:41:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 10.0.0.2 -s 4422 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x failover 00:18:39.906 00:18:39.906 11:41:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4421 00:18:40.164 11:41:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:43.448 11:41:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:18:43.448 [2024-11-20 11:41:46.570833] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:18:43.448 11:41:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:44.385 11:41:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4422 00:18:44.385 11:41:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1677639 00:18:50.953 { 00:18:50.953 "results": [ 00:18:50.953 { 00:18:50.953 "job": "NVMe0n1", 00:18:50.953 "core_mask": "0x1", 00:18:50.953 "workload": "verify", 00:18:50.953 "status": "finished", 00:18:50.953 "verify_range": { 00:18:50.953 "start": 0, 00:18:50.953 "length": 16384 00:18:50.953 }, 00:18:50.953 "queue_depth": 128, 00:18:50.953 "io_size": 4096, 00:18:50.953 "runtime": 15.004645, 00:18:50.953 "iops": 14151.351131599582, 00:18:50.953 "mibps": 55.27871535781087, 00:18:50.953 "io_failed": 4628, 00:18:50.953 "io_timeout": 0, 00:18:50.953 "avg_latency_us": 8830.334138703034, 00:18:50.953 "min_latency_us": 365.0782608695652, 00:18:50.953 "max_latency_us": 1035810.7269565217 00:18:50.953 } 00:18:50.953 ], 00:18:50.953 "core_count": 1 00:18:50.953 } 00:18:50.953 11:41:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 
-- # killprocess 1677455 00:18:50.953 11:41:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1677455 ']' 00:18:50.953 11:41:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1677455 00:18:50.953 11:41:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:50.953 11:41:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.953 11:41:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1677455 00:18:50.953 11:41:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:50.953 11:41:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:50.953 11:41:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1677455' 00:18:50.953 killing process with pid 1677455 00:18:50.953 11:41:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1677455 00:18:50.953 11:41:53 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1677455 00:18:50.953 11:41:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:50.953 [2024-11-20 11:41:37.265821] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:18:50.953 [2024-11-20 11:41:37.265890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1677455 ] 00:18:50.953 [2024-11-20 11:41:37.344992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.953 [2024-11-20 11:41:37.389943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.953 Running I/O for 15 seconds... 00:18:50.953 17792.00 IOPS, 69.50 MiB/s [2024-11-20T10:41:54.433Z] 9714.50 IOPS, 37.95 MiB/s [2024-11-20T10:41:54.433Z] [2024-11-20 11:41:40.840047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004332000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24384 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000432e000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004324000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 
11:41:40.840262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004322000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004320000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840370] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004318000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004314000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004312000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004310000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430c000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 
11:41:40.840582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x1bef00 00:18:50.953 [2024-11-20 11:41:40.840631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.953 [2024-11-20 11:41:40.840652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.953 [2024-11-20 11:41:40.840672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24592 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.953 [2024-11-20 11:41:40.840691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.953 [2024-11-20 11:41:40.840713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.953 [2024-11-20 11:41:40.840733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.953 [2024-11-20 11:41:40.840752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.953 [2024-11-20 11:41:40.840774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.953 [2024-11-20 11:41:40.840785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.953 [2024-11-20 11:41:40.840794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 
00:18:50.953 [2024-11-20 11:41:40.840804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.953 [2024-11-20 11:41:40.840813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 [... identical WRITE command/completion pairs repeated for lba 24648 through 25368 (len:8 each), every completion ABORTED - SQ DELETION (00/08) qid:1 cdw0:74dd1000 sqhd:7250 ...] 00:18:50.955 [2024-11-20 11:41:40.844331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.955 [2024-11-20 11:41:40.844347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.955 [2024-11-20 11:41:40.844355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:8 PRP1 0x0 PRP2 0x0 00:18:50.955 [2024-11-20 11:41:40.844365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:40.844413] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:50.955 [2024-11-20 11:41:40.844425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:18:50.955 [2024-11-20 11:41:40.847239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:50.955 [2024-11-20 11:41:40.861935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:50.955 [2024-11-20 11:41:40.898006] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:18:50.955 11519.33 IOPS, 45.00 MiB/s [2024-11-20T10:41:54.435Z] 13113.00 IOPS, 51.22 MiB/s [2024-11-20T10:41:54.435Z] 12638.40 IOPS, 49.37 MiB/s [2024-11-20T10:41:54.435Z] [2024-11-20 11:41:44.374416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.955 [2024-11-20 11:41:44.374466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.374478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.955 [2024-11-20 11:41:44.374488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.374497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.955 [2024-11-20 11:41:44.374506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.374516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.955 [2024-11-20 11:41:44.374525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.376184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0 00:18:50.955 [2024-11-20 11:41:44.376201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:50.955 [2024-11-20 11:41:44.376214] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:18:50.955 [2024-11-20 11:41:44.376223] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] already in failed state 00:18:50.955 [2024-11-20 11:41:44.376242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:123256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x1bf100 00:18:50.955 [2024-11-20 11:41:44.376252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 [... similar READ (SGL KEYED, key:0x1bf100) and WRITE (SGL DATA BLOCK) command/completion pairs repeated for lba 123264 through 123864 and onward, every completion ABORTED - SQ DELETION (00/08) qid:1 cdw0:74dd1000 sqhd:7250 ...] 00:18:50.955 [2024-11-20 
11:41:44.376845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:123312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x1bf100 00:18:50.955 [2024-11-20 11:41:44.376855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.376886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004336000 len:0x1000 key:0x1bf100 00:18:50.955 [2024-11-20 11:41:44.376896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.376927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:123328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004338000 len:0x1000 key:0x1bf100 00:18:50.955 [2024-11-20 11:41:44.376937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.376969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:123336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433a000 len:0x1000 key:0x1bf100 00:18:50.955 [2024-11-20 11:41:44.376979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:123344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433c000 len:0x1000 key:0x1bf100 00:18:50.955 [2024-11-20 11:41:44.377022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:123872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.955 [2024-11-20 11:41:44.377070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.955 [2024-11-20 11:41:44.377113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:123888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.955 [2024-11-20 11:41:44.377154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.955 [2024-11-20 11:41:44.377196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:123904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.955 [2024-11-20 11:41:44.377237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:123912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.955 [2024-11-20 11:41:44.377278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:123920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.955 [2024-11-20 11:41:44.377320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:123928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.955 [2024-11-20 11:41:44.377360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.955 [2024-11-20 11:41:44.377401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:123944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.955 [2024-11-20 11:41:44.377443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:123952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.955 [2024-11-20 11:41:44.377483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377514] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:123960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.955 [2024-11-20 11:41:44.377524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:123968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.955 [2024-11-20 11:41:44.377566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:123976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.955 [2024-11-20 11:41:44.377607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:123984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.955 [2024-11-20 11:41:44.377647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.955 [2024-11-20 11:41:44.377688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:123352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x1bf100 00:18:50.955 [2024-11-20 11:41:44.377729] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x1bf100 00:18:50.955 [2024-11-20 11:41:44.377770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x1bf100 00:18:50.955 [2024-11-20 11:41:44.377811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:123376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x1bf100 00:18:50.955 [2024-11-20 11:41:44.377853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:123384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x1bf100 00:18:50.955 [2024-11-20 11:41:44.377894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 key:0x1bf100 00:18:50.955 [2024-11-20 11:41:44.377935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.377966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:123400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004324000 len:0x1000 key:0x1bf100 00:18:50.955 [2024-11-20 11:41:44.377976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.378008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004322000 len:0x1000 key:0x1bf100 00:18:50.955 [2024-11-20 11:41:44.378020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.378057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x1bf100 00:18:50.955 [2024-11-20 11:41:44.378067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.378099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:123424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x1bf100 00:18:50.955 [2024-11-20 11:41:44.378109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 [2024-11-20 11:41:44.378141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:123432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x1bf100 00:18:50.955 [2024-11-20 11:41:44.378151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.955 
[2024-11-20 11:41:44.378183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.378193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.378234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:123456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.378276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:123464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.378317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.378358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:124000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.378399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.378440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.378481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.378524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.378564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:124040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.378605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:124048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.378645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:124056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.378686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:123480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433e000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.378727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:124064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.378768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:124072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.378809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:124080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.378849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:124088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.378889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:124096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.378930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.378961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.378971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.379013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:123488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.379060] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:123496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.379102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:123504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.379144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:123512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.379187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.379228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:123528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.379270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.379311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.379353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.379394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:123560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.379436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.379479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 
sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.379521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:123584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.379562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:123592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.379603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.379645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.379687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 
11:41:44.379718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:124120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.379728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:124128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.379769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:124136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.379809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:124144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.379850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:124152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.379891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.379932] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.379962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.379974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.380015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.380060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.380101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:124200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.380142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:124208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.380182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:124216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.380223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.380264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:124232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.380305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:124240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.380345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:124248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.956 [2024-11-20 11:41:44.380385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:123616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432e000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.380426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.380469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:123632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.380512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.380553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:123648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.380595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.380636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.380678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:123672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004320000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.380719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:123680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.380760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:123688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.380801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 
m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:123696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.380843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:123704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004318000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.380884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:123712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.380925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.380958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004314000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.380968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.381000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:123728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004312000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.381010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.381045] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:123736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.381056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.381087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:123744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004310000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.381097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.381129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:123752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430c000 len:0x1000 key:0x1bf100 00:18:50.956 [2024-11-20 11:41:44.381139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.956 [2024-11-20 11:41:44.381171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:123760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x1bf100 00:18:50.957 [2024-11-20 11:41:44.381181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:44.381213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:123768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x1bf100 00:18:50.957 [2024-11-20 11:41:44.381223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:44.381255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:28 nsid:1 lba:123776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x1bf100 00:18:50.957 [2024-11-20 11:41:44.381265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:44.381296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:123784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x1bf100 00:18:50.957 [2024-11-20 11:41:44.381306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:44.381338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x1bf100 00:18:50.957 [2024-11-20 11:41:44.381348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:44.381379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:123800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x1bf100 00:18:50.957 [2024-11-20 11:41:44.381389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:44.381420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:123808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004340000 len:0x1000 key:0x1bf100 00:18:50.957 [2024-11-20 11:41:44.381432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:44.381463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:124256 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:44.381473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:44.381504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:124264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:44.381514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:44.395928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.957 [2024-11-20 11:41:44.395948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.957 [2024-11-20 11:41:44.395957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124272 len:8 PRP1 0x0 PRP2 0x0 00:18:50.957 [2024-11-20 11:41:44.395967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:44.396045] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Unable to perform failover, already in progress. 00:18:50.957 [2024-11-20 11:41:44.396086] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Unable to perform failover, already in progress. 00:18:50.957 [2024-11-20 11:41:44.398846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:50.957 [2024-11-20 11:41:44.439747] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:18:50.957 11473.00 IOPS, 44.82 MiB/s [2024-11-20T10:41:54.437Z] 12418.29 IOPS, 48.51 MiB/s [2024-11-20T10:41:54.437Z] 13123.38 IOPS, 51.26 MiB/s [2024-11-20T10:41:54.437Z] 13672.89 IOPS, 53.41 MiB/s [2024-11-20T10:41:54.437Z] 12328.80 IOPS, 48.16 MiB/s [2024-11-20T10:41:54.437Z] [2024-11-20 11:41:48.780840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:93416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432e000 len:0x1000 key:0x1bef00 00:18:50.957 [2024-11-20 11:41:48.780880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.780900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x1bef00 00:18:50.957 [2024-11-20 11:41:48.780910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.780922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:93432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x1bef00 00:18:50.957 [2024-11-20 11:41:48.780931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.780943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.780952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.780963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 
11:41:48.780972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.780983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781203] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 
nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 
sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:50.957 [2024-11-20 11:41:48.781536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:93440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x1bef00 00:18:50.957 [2024-11-20 11:41:48.781596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x1bef00 00:18:50.957 [2024-11-20 11:41:48.781616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x1bef00 00:18:50.957 [2024-11-20 11:41:48.781636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x1bef00 00:18:50.957 [2024-11-20 11:41:48.781655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x1bef00 00:18:50.957 [2024-11-20 11:41:48.781675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781745] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.957 [2024-11-20 11:41:48.781832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:93480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x1bef00 00:18:50.957 [2024-11-20 11:41:48.781851] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:93488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x1bef00 00:18:50.957 [2024-11-20 11:41:48.781870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x1bef00 00:18:50.957 [2024-11-20 11:41:48.781890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004324000 len:0x1000 key:0x1bef00 00:18:50.957 [2024-11-20 11:41:48.781909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 key:0x1bef00 00:18:50.957 [2024-11-20 11:41:48.781929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.957 [2024-11-20 11:41:48.781939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.781948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.781960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.781969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.781980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.781988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.781999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:93544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.782008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.782028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.782052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 
11:41:48.782062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004348000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.782071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004346000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.782091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:93584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004344000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.782111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004342000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.782131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:93600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.782151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782161] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94208 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 
[2024-11-20 11:41:48.782494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782599] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782815] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 
nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:50.958 [2024-11-20 11:41:48.782931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:93608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.782950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.782970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ca000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.782989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.782999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c8000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.783008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.783019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c6000 len:0x1000 key:0x1bef00 
00:18:50.958 [2024-11-20 11:41:48.783028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.783043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c4000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.783052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.783062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:93656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c2000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.783071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.783082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.783091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.783101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.783111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.783122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.783131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.783143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.783152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.783163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.783175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.783185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.783194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.783205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.783214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.783224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.783233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.783244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.783252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.783263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.783272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.783283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:93744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.783292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.783302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.783311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.783322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a8000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.783331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 
00:18:50.958 [2024-11-20 11:41:48.783341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:93768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.783350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.783362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x1bef00 00:18:50.958 [2024-11-20 11:41:48.783371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.958 [2024-11-20 11:41:48.783382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x1bef00 00:18:50.959 [2024-11-20 11:41:48.783391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:74dd1000 sqhd:7250 p:0 m:0 dnr:0 00:18:50.959 [2024-11-20 11:41:48.785160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:50.959 [2024-11-20 11:41:48.785173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:50.959 [2024-11-20 11:41:48.785182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93792 len:8 PRP1 0x0 PRP2 0x0 00:18:50.959 [2024-11-20 11:41:48.785192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.959 [2024-11-20 11:41:48.785239] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:18:50.959 [2024-11-20 11:41:48.785252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:50.959 [2024-11-20 11:41:48.788045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:50.959 [2024-11-20 11:41:48.802482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0 00:18:50.959 [2024-11-20 11:41:48.842202] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:18:50.959 12723.64 IOPS, 49.70 MiB/s [2024-11-20T10:41:54.439Z] 13163.67 IOPS, 51.42 MiB/s [2024-11-20T10:41:54.439Z] 13542.77 IOPS, 52.90 MiB/s [2024-11-20T10:41:54.439Z] 13869.50 IOPS, 54.18 MiB/s [2024-11-20T10:41:54.439Z] 14151.40 IOPS, 55.28 MiB/s 00:18:50.959 Latency(us) 00:18:50.959 [2024-11-20T10:41:54.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.959 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:50.959 Verification LBA range: start 0x0 length 0x4000 00:18:50.959 NVMe0n1 : 15.00 14151.35 55.28 308.44 0.00 8830.33 365.08 1035810.73 00:18:50.959 [2024-11-20T10:41:54.439Z] =================================================================================================================== 00:18:50.959 [2024-11-20T10:41:54.439Z] Total : 14151.35 55.28 308.44 0.00 8830.33 365.08 1035810.73 00:18:50.959 Received shutdown signal, test time was about 15.000000 seconds 00:18:50.959 00:18:50.959 Latency(us) 00:18:50.959 [2024-11-20T10:41:54.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.959 [2024-11-20T10:41:54.439Z] =================================================================================================================== 00:18:50.959 [2024-11-20T10:41:54.439Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:50.959 11:41:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # 
grep -c 'Resetting controller successful' 00:18:50.959 11:41:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:18:50.959 11:41:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:18:50.959 11:41:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1679641 00:18:50.959 11:41:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:18:50.959 11:41:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1679641 /var/tmp/bdevperf.sock 00:18:50.959 11:41:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1679641 ']' 00:18:50.959 11:41:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.959 11:41:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.959 11:41:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:50.959 11:41:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.959 11:41:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:50.959 11:41:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.959 11:41:54 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:50.959 11:41:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4421 00:18:51.216 [2024-11-20 11:41:54.514210] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4421 *** 00:18:51.217 11:41:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4422 00:18:51.474 [2024-11-20 11:41:54.714897] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4422 *** 00:18:51.475 11:41:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:51.732 NVMe0n1 00:18:51.732 11:41:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:51.990 00:18:51.990 11:41:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:52.247 00:18:52.247 11:41:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:52.247 11:41:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:52.513 11:41:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:52.776 11:41:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:18:56.059 11:41:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:56.059 11:41:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:18:56.059 11:41:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1680372 00:18:56.059 11:41:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:56.059 11:41:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1680372 00:18:56.993 { 00:18:56.993 "results": [ 00:18:56.993 { 00:18:56.994 "job": "NVMe0n1", 00:18:56.994 "core_mask": "0x1", 00:18:56.994 "workload": "verify", 00:18:56.994 "status": "finished", 00:18:56.994 "verify_range": { 00:18:56.994 "start": 0, 00:18:56.994 "length": 16384 00:18:56.994 }, 00:18:56.994 "queue_depth": 128, 00:18:56.994 "io_size": 4096, 00:18:56.994 "runtime": 1.00893, 00:18:56.994 "iops": 17761.39078033164, 00:18:56.994 "mibps": 69.38043273567047, 00:18:56.994 "io_failed": 0, 00:18:56.994 "io_timeout": 0, 00:18:56.994 "avg_latency_us": 7166.985937888199, 
00:18:56.994 "min_latency_us": 2535.958260869565, 00:18:56.994 "max_latency_us": 15044.786086956521 00:18:56.994 } 00:18:56.994 ], 00:18:56.994 "core_count": 1 00:18:56.994 } 00:18:56.994 11:42:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:56.994 [2024-11-20 11:41:54.102550] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:18:56.994 [2024-11-20 11:41:54.102621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1679641 ] 00:18:56.994 [2024-11-20 11:41:54.181187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.994 [2024-11-20 11:41:54.224320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.994 [2024-11-20 11:41:55.961123] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:56.994 [2024-11-20 11:41:55.961651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:18:56.994 [2024-11-20 11:41:55.961686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:18:56.994 [2024-11-20 11:41:55.984914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0 00:18:56.994 [2024-11-20 11:41:56.001117] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:18:56.994 Running I/O for 1 seconds... 
00:18:56.994 17748.00 IOPS, 69.33 MiB/s 00:18:56.994 Latency(us) 00:18:56.994 [2024-11-20T10:42:00.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.994 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:56.994 Verification LBA range: start 0x0 length 0x4000 00:18:56.994 NVMe0n1 : 1.01 17761.39 69.38 0.00 0.00 7166.99 2535.96 15044.79 00:18:56.994 [2024-11-20T10:42:00.474Z] =================================================================================================================== 00:18:56.994 [2024-11-20T10:42:00.474Z] Total : 17761.39 69.38 0.00 0.00 7166.99 2535.96 15044.79 00:18:56.994 11:42:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:56.994 11:42:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:18:57.252 11:42:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:57.510 11:42:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:57.510 11:42:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:18:57.510 11:42:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:57.768 11:42:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:19:01.050 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:01.050 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:19:01.050 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1679641 00:19:01.050 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1679641 ']' 00:19:01.050 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1679641 00:19:01.050 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:01.051 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.051 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1679641 00:19:01.051 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:01.051 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:01.051 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1679641' 00:19:01.051 killing process with pid 1679641 00:19:01.051 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1679641 00:19:01.051 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1679641 00:19:01.309 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:19:01.309 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm 
-f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@99 -- # sync 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # set +e 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:19:01.567 rmmod nvme_rdma 00:19:01.567 rmmod nvme_fabrics 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # set -e 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # return 0 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # '[' -n 1677236 ']' 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@337 -- # killprocess 1677236 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1677236 ']' 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1677236 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1677236 00:19:01.567 
11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1677236' 00:19:01.567 killing process with pid 1677236 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1677236 00:19:01.567 11:42:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1677236 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # nvmf_fini 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@264 -- # local dev 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@267 -- # remove_target_ns 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@268 -- # delete_main_bridge 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@130 -- # return 0 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@275 -- # (( 4 
== 3 )) 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # _dev=0 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # dev_map=() 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/setup.sh@284 -- # iptr 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@542 -- # iptables-save 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:19:01.826 11:42:05 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@542 -- # iptables-restore 00:19:01.826 00:19:01.826 real 0m36.460s 00:19:01.826 user 2m2.694s 00:19:01.826 sys 0m7.105s 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.826 11:42:05 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:01.826 ************************************ 00:19:01.826 END TEST nvmf_failover 00:19:01.826 ************************************ 00:19:02.085 11:42:05 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:19:02.085 11:42:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:02.085 11:42:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.085 11:42:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.085 ************************************ 00:19:02.085 START TEST nvmf_host_multipath_status 00:19:02.085 ************************************ 00:19:02.085 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:19:02.085 * Looking for test storage... 
00:19:02.085 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:02.085 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:02.085 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:19:02.086 11:42:05 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.086 11:42:05 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:02.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.086 --rc genhtml_branch_coverage=1 00:19:02.086 --rc genhtml_function_coverage=1 00:19:02.086 --rc genhtml_legend=1 00:19:02.086 --rc geninfo_all_blocks=1 00:19:02.086 --rc geninfo_unexecuted_blocks=1 00:19:02.086 00:19:02.086 ' 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:02.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.086 --rc genhtml_branch_coverage=1 00:19:02.086 --rc genhtml_function_coverage=1 00:19:02.086 --rc genhtml_legend=1 00:19:02.086 --rc geninfo_all_blocks=1 00:19:02.086 --rc geninfo_unexecuted_blocks=1 00:19:02.086 00:19:02.086 ' 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:02.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.086 --rc genhtml_branch_coverage=1 00:19:02.086 --rc genhtml_function_coverage=1 00:19:02.086 --rc genhtml_legend=1 00:19:02.086 --rc geninfo_all_blocks=1 00:19:02.086 --rc geninfo_unexecuted_blocks=1 00:19:02.086 00:19:02.086 ' 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:02.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.086 --rc genhtml_branch_coverage=1 00:19:02.086 --rc genhtml_function_coverage=1 00:19:02.086 --rc genhtml_legend=1 00:19:02.086 --rc geninfo_all_blocks=1 00:19:02.086 --rc geninfo_unexecuted_blocks=1 00:19:02.086 00:19:02.086 ' 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:02.086 
11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:02.086 11:42:05 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:02.086 11:42:05 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@50 -- # : 0 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:02.086 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:02.086 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:02.087 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:02.087 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:02.087 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:02.087 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:02.087 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:19:02.087 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:19:02.087 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:02.087 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:02.087 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:19:02.087 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.087 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:02.087 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:02.087 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # remove_target_ns 00:19:02.087 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:02.087 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:02.087 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:02.345 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:19:02.345 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:19:02.345 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # xtrace_disable 00:19:02.345 11:42:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:08.924 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:08.924 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@131 -- # pci_devs=() 00:19:08.924 11:42:11 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:08.924 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:08.924 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:19:08.924 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:08.924 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:08.924 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@135 -- # net_devs=() 00:19:08.924 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:08.924 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@136 -- # e810=() 00:19:08.924 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@136 -- # local -ga e810 00:19:08.924 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@137 -- # x722=() 00:19:08.924 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@137 -- # local -ga x722 00:19:08.924 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@138 -- # mlx=() 00:19:08.924 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@138 -- # local -ga mlx 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.925 11:42:11 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:08.925 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:08.925 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 
-- # NVME_CONNECT='nvme connect -i 15' 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:08.925 Found net devices under 0000:18:00.0: mlx_0_0 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: 
mlx_0_1' 00:19:08.925 Found net devices under 0000:18:00.1: mlx_0_1 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # get_rdma_if_list 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # rdma_devs=() 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@89 -- # continue 2 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@89 -- # continue 2 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # is_hw=yes 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@61 -- # uname 00:19:08.925 11:42:11 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_cm 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_core 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_umad 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe iw_cm 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:19:08.925 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@28 -- # local -g _dev 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # ips=() 
00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # key_initiator=target1 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@207 -- # val_to_ip 167772161 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772161 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:19:08.926 10.0.0.1 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772162 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # eval ' ip 
addr add 10.0.0.2/24 dev mlx_0_1' 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:19:08.926 10.0.0.2 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:19:08.926 11:42:11 
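The `set_ip` calls above derive dotted-quad addresses from an integer IP pool: `val_to_ip 167772161` prints `10.0.0.1` via `printf '%u.%u.%u.%u\n'`. A minimal sketch of that conversion, reconstructed from the printf call visible in the trace (the octet-splitting shift arithmetic is an assumption, since the body of `nvmf/setup.sh` is not shown in this log):

```shell
# Hypothetical reconstruction of setup.sh's val_to_ip helper: split a
# 32-bit integer (e.g. 167772161 == 0x0A000001) into four octets and
# print them dotted-quad, matching the printf seen in the trace.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

This also explains the `ip_pool=0x0a000001` seen earlier in the trace: each initiator/target pair consumes two consecutive values from the pool, yielding 10.0.0.1 and 10.0.0.2 for pair 0.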
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@38 -- # ping_ips 1 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target0 00:19:08.926 11:42:11 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:19:08.926 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:19:08.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:08.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:19:08.926 00:19:08.926 --- 10.0.0.2 ping statistics --- 00:19:08.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.926 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target0 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:19:08.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:08.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.025 ms 00:19:08.927 00:19:08.927 --- 10.0.0.2 ping statistics --- 00:19:08.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.927 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair++ )) 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # return 0 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:19:08.927 
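Throughout the trace above, the assigned address is mirrored into `/sys/class/net/<dev>/ifalias` by `set_ip` and read back later by `get_ip_address` with a plain `cat`. A sketch of that round-trip, using a temporary directory as a stand-in for sysfs so it runs without Mellanox hardware (the fake sysfs root and the simplified function bodies are assumptions for illustration):

```shell
# Sketch of the set_ip / get_ip_address round-trip from the trace: the IP
# is stored in <dev>/ifalias and recovered from it later. A mktemp dir
# stands in for /sys/class/net here (assumption, for portability).
sysfs=$(mktemp -d)
mkdir -p "$sysfs/mlx_0_1"

set_ip() { echo "$2" | tee "$sysfs/$1/ifalias" >/dev/null; }
get_ip() { cat "$sysfs/$1/ifalias"; }

set_ip mlx_0_1 10.0.0.2
ip_back=$(get_ip mlx_0_1)
echo "$ip_back"   # 10.0.0.2

rm -rf "$sysfs"
```

Using `ifalias` as the source of truth is why the log keeps re-reading `/sys/class/net/mlx_0_1/ifalias` instead of parsing `ip addr` output each time the target IP is needed.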
11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target0 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:19:08.927 11:42:11 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target1 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:08.927 11:42:11 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target0 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:08.927 11:42:11 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.927 11:42:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:19:08.927 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:19:08.927 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:19:08.927 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:19:08.927 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target1 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:08.928 11:42:12 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # nvmfpid=1684526 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # waitforlisten 1684526 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1684526 ']' 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:08.928 [2024-11-20 11:42:12.107283] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:19:08.928 [2024-11-20 11:42:12.107347] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.928 [2024-11-20 11:42:12.184729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:08.928 [2024-11-20 11:42:12.231215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.928 [2024-11-20 11:42:12.231258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.928 [2024-11-20 11:42:12.231268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.928 [2024-11-20 11:42:12.231277] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.928 [2024-11-20 11:42:12.231284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:08.928 [2024-11-20 11:42:12.234055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.928 [2024-11-20 11:42:12.234059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1684526 00:19:08.928 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:09.187 [2024-11-20 11:42:12.575765] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc98b60/0xc9d050) succeed. 00:19:09.187 [2024-11-20 11:42:12.584646] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc9a0b0/0xcde6f0) succeed. 
00:19:09.446 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:09.446 Malloc0 00:19:09.446 11:42:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:09.706 11:42:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:09.965 11:42:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:19:09.965 [2024-11-20 11:42:13.415984] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:19:09.965 11:42:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4421 00:19:10.223 [2024-11-20 11:42:13.612432] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4421 *** 00:19:10.224 11:42:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1684771 00:19:10.224 11:42:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:10.224 11:42:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:10.224 
11:42:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1684771 /var/tmp/bdevperf.sock 00:19:10.224 11:42:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1684771 ']' 00:19:10.224 11:42:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.224 11:42:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.224 11:42:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:10.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.224 11:42:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.224 11:42:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:10.484 11:42:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.484 11:42:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:19:10.484 11:42:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:10.743 11:42:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:11.002 Nvme0n1 00:19:11.002 11:42:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:11.262 Nvme0n1 00:19:11.262 11:42:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:11.262 11:42:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:13.168 11:42:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:13.168 11:42:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 -n optimized 00:19:13.427 11:42:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4421 -n optimized 00:19:13.687 11:42:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:14.625 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:14.626 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:14.626 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:14.626 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.885 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.885 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:14.885 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.885 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:15.145 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:15.145 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:15.145 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:15.145 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.405 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.405 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:15.405 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:15.405 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.405 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.405 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:15.405 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.405 11:42:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:15.664 11:42:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.664 11:42:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:15.664 11:42:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.664 11:42:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:15.923 11:42:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.923 11:42:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:15.923 11:42:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 -n non_optimized 
00:19:16.182 11:42:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4421 -n optimized 00:19:16.182 11:42:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:17.558 11:42:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:17.558 11:42:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:17.558 11:42:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.558 11:42:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:17.558 11:42:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:17.558 11:42:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:17.558 11:42:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.558 11:42:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:17.817 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.817 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected 
true 00:19:17.817 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.817 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:17.817 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.817 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:17.817 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.817 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:18.075 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.075 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:18.075 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.075 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:18.334 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.334 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 
accessible true 00:19:18.334 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.334 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:18.592 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.592 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:18.592 11:42:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 -n non_optimized 00:19:18.592 11:42:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4421 -n non_optimized 00:19:18.850 11:42:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:19.785 11:42:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:19.785 11:42:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:19.785 11:42:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:19.785 11:42:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.044 11:42:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.044 11:42:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:20.044 11:42:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.044 11:42:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:20.303 11:42:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:20.303 11:42:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:20.303 11:42:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.303 11:42:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:20.563 11:42:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.563 11:42:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:20.563 11:42:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:20.563 11:42:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.823 11:42:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.823 11:42:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:20.823 11:42:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.823 11:42:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:20.823 11:42:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.823 11:42:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:20.823 11:42:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.823 11:42:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:21.088 11:42:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.088 11:42:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:21.088 11:42:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 -n non_optimized 00:19:21.347 11:42:24 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4421 -n inaccessible 00:19:21.606 11:42:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:22.543 11:42:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:22.543 11:42:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:22.543 11:42:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.543 11:42:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:22.804 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:22.804 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:22.804 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.804 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:22.804 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:22.804 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:22.804 
11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.804 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:23.064 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:23.064 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:23.064 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.064 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:23.324 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:23.324 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:23.324 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.324 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:23.583 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:23.583 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 
00:19:23.583 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:23.583 11:42:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.842 11:42:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:23.842 11:42:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:23.842 11:42:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 -n inaccessible 00:19:23.842 11:42:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4421 -n inaccessible 00:19:24.101 11:42:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:25.038 11:42:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:25.038 11:42:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:25.038 11:42:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.038 11:42:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").current' 00:19:25.296 11:42:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:25.296 11:42:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:25.296 11:42:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.296 11:42:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:25.554 11:42:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:25.554 11:42:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:25.554 11:42:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.555 11:42:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:25.813 11:42:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.813 11:42:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:25.813 11:42:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.813 11:42:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").connected' 00:19:25.813 11:42:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.813 11:42:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:25.813 11:42:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:25.813 11:42:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:26.072 11:42:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:26.072 11:42:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:26.072 11:42:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:26.072 11:42:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:26.330 11:42:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:26.330 11:42:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:26.330 11:42:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 -n inaccessible 00:19:26.589 11:42:29 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4421 -n optimized 00:19:26.589 11:42:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:27.967 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:27.967 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:27.967 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.968 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:27.968 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:27.968 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:27.968 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.968 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:28.226 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.226 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:28.226 
11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.226 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:28.226 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.226 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:28.226 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.226 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:28.484 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.484 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:28.484 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:28.484 11:42:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.743 11:42:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:28.743 11:42:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
00:19:28.743 11:42:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.743 11:42:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:29.001 11:42:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:29.001 11:42:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:29.001 11:42:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:29.001 11:42:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 -n optimized 00:19:29.260 11:42:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4421 -n optimized 00:19:29.520 11:42:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:30.457 11:42:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:30.457 11:42:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:30.457 11:42:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:30.457 11:42:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:30.716 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:30.717 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:30.717 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:30.717 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:30.976 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:30.976 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:30.976 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:30.976 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.235 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.235 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:31.235 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.235 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:31.235 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.235 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:31.235 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.235 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:31.494 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.494 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:31.494 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.494 11:42:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:31.753 11:42:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.753 11:42:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:31.753 11:42:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 -n non_optimized 00:19:32.013 11:42:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4421 -n optimized 00:19:32.013 11:42:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:33.391 11:42:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:33.391 11:42:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:33.391 11:42:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.391 11:42:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:33.391 11:42:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:33.391 11:42:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:33.391 11:42:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:33.391 11:42:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.391 11:42:36 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:33.391 11:42:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:33.391 11:42:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:33.391 11:42:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.650 11:42:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:33.650 11:42:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:33.650 11:42:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.650 11:42:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:33.910 11:42:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:33.910 11:42:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:33.910 11:42:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.910 11:42:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:34.169 
11:42:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:34.169 11:42:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:34.169 11:42:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.169 11:42:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:34.429 11:42:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:34.429 11:42:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:34.429 11:42:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 -n non_optimized 00:19:34.429 11:42:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4421 -n non_optimized 00:19:34.688 11:42:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:35.627 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:35.627 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:35.627 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.627 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:35.886 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:35.886 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:35.886 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.886 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:36.146 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:36.146 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:36.146 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.146 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:36.405 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:36.405 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:36.405 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.405 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:36.405 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:36.405 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:36.405 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.405 11:42:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:36.716 11:42:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:36.716 11:42:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:36.716 11:42:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.716 11:42:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:37.027 11:42:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:37.027 11:42:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:37.027 11:42:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 -n non_optimized 00:19:37.027 11:42:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4421 -n inaccessible 00:19:37.335 11:42:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:38.271 11:42:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:38.271 11:42:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:38.271 11:42:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.271 11:42:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:38.530 11:42:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:38.530 11:42:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:38.530 11:42:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:38.530 11:42:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.789 11:42:42 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:38.789 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:38.789 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.789 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:38.789 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:38.789 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:38.789 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.789 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:39.048 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:39.048 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:39.048 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.048 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:39.306 
11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:39.306 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:39.306 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.306 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:39.565 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:39.565 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1684771 00:19:39.565 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1684771 ']' 00:19:39.565 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1684771 00:19:39.565 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:19:39.565 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.565 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1684771 00:19:39.565 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:39.565 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:39.565 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1684771' 00:19:39.565 killing process with pid 1684771 
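Every `port_status` check in the log above follows the same pattern: dump the host's I/O paths with `bdev_nvme_get_io_paths` over the bdevperf RPC socket, then use `jq` to select the path by listener port (`trsvcid`) and read one boolean flag. The sketch below reproduces that `jq` filter against a hypothetical sample document shaped like the RPC output (field names are taken from the queries in the log; the sample values and the `port_status` helper name are illustrative, not the test script's actual definition):

```shell
#!/bin/sh
# Hypothetical sample shaped like bdev_nvme_get_io_paths output.
# Field names (poll_groups, io_paths, transport.trsvcid, current,
# connected, accessible) match the jq queries seen in the log.
paths='{
  "poll_groups": [
    { "io_paths": [
        { "transport": { "trsvcid": "4420" },
          "current": true, "connected": true, "accessible": true },
        { "transport": { "trsvcid": "4421" },
          "current": false, "connected": true, "accessible": false }
    ] }
  ]
}'

# Same filter the test uses: pick the io_path whose listener port
# matches, then print a single status flag for it.
port_status() {
  port=$1; flag=$2
  echo "$paths" |
    jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$flag"
}

port_status 4420 current      # prints: true
port_status 4421 accessible   # prints: false
```

In the real run the JSON comes from `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths` rather than a literal, and the `[[ $status == … ]]` comparisons in the log assert each flag against the expectation passed to `check_status` after each `set_ANA_state` transition.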
00:19:39.565 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1684771 00:19:39.565 11:42:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1684771 00:19:39.565 { 00:19:39.565 "results": [ 00:19:39.565 { 00:19:39.565 "job": "Nvme0n1", 00:19:39.565 "core_mask": "0x4", 00:19:39.565 "workload": "verify", 00:19:39.565 "status": "terminated", 00:19:39.565 "verify_range": { 00:19:39.565 "start": 0, 00:19:39.565 "length": 16384 00:19:39.565 }, 00:19:39.565 "queue_depth": 128, 00:19:39.565 "io_size": 4096, 00:19:39.565 "runtime": 28.106881, 00:19:39.565 "iops": 15831.034400437387, 00:19:39.565 "mibps": 61.839978126708544, 00:19:39.565 "io_failed": 0, 00:19:39.565 "io_timeout": 0, 00:19:39.565 "avg_latency_us": 8066.028018643157, 00:19:39.565 "min_latency_us": 56.76521739130435, 00:19:39.565 "max_latency_us": 3019898.88 00:19:39.565 } 00:19:39.565 ], 00:19:39.565 "core_count": 1 00:19:39.565 } 00:19:39.828 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1684771 00:19:39.828 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:39.828 [2024-11-20 11:42:13.680323] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:19:39.828 [2024-11-20 11:42:13.680392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1684771 ] 00:19:39.828 [2024-11-20 11:42:13.756385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.828 [2024-11-20 11:42:13.800779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.828 Running I/O for 90 seconds... 
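When bdevperf is killed at the end of the test it emits the JSON results object interleaved in the log above (`"status": "terminated"`, ~15831 IOPS over a 28.1 s runtime, zero failed I/Os). A result summary like that can be reduced to one line with `jq`; the values below are copied from the log's results object, and the summary format is just an illustration:

```shell
#!/bin/sh
# Trimmed copy of the bdevperf results object printed in the log.
results='{
  "results": [
    { "job": "Nvme0n1", "status": "terminated", "runtime": 28.106881,
      "iops": 15831.034400437387, "io_failed": 0 }
  ],
  "core_count": 1
}'

# Summarize each job: name, whole-number IOPS, runtime, failure count.
echo "$results" |
  jq -r '.results[] |
    "\(.job): \(.iops|floor) IOPS over \(.runtime|floor)s, \(.io_failed) failed"'
# prints: Nvme0n1: 15831 IOPS over 28s, 0 failed
```

The `io_failed: 0` field is what makes this a pass despite the forced `terminated` status: the verify workload survived every ANA-state transition exercised above without an I/O error.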
00:19:39.828 18176.00 IOPS, 71.00 MiB/s [2024-11-20T10:42:43.308Z] 18304.00 IOPS, 71.50 MiB/s [2024-11-20T10:42:43.308Z] 18389.33 IOPS, 71.83 MiB/s [2024-11-20T10:42:43.308Z] 18432.00 IOPS, 72.00 MiB/s [2024-11-20T10:42:43.308Z] 18457.60 IOPS, 72.10 MiB/s [2024-11-20T10:42:43.308Z] 18453.33 IOPS, 72.08 MiB/s [2024-11-20T10:42:43.308Z] 18474.71 IOPS, 72.17 MiB/s [2024-11-20T10:42:43.308Z] 18472.75 IOPS, 72.16 MiB/s [2024-11-20T10:42:43.308Z] 18464.56 IOPS, 72.13 MiB/s [2024-11-20T10:42:43.308Z] 18455.90 IOPS, 72.09 MiB/s [2024-11-20T10:42:43.308Z] 18456.45 IOPS, 72.10 MiB/s [2024-11-20T10:42:43.308Z] 18454.42 IOPS, 72.09 MiB/s
00:19:39.828 [2024-11-20 11:42:27.239502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:39.828 [2024-11-20 11:42:27.239559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:19:39.829 [2024-11-20 11:42:27.240155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004312000 len:0x1000 key:0x1be300
00:19:39.829 [2024-11-20 11:42:27.240165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
[... identical nvme_io_qpair_print_command/spdk_nvme_print_completion pairs repeat at 11:42:27.239-242 for WRITE lba:11160-11536 (SGL DATA BLOCK OFFSET) and READ lba:10544-11016 (SGL KEYED DATA BLOCK, key:0x1be300), every outstanding I/O on qid:1 completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.831 [2024-11-20 11:42:27.242807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:39.831 [2024-11-20 11:42:27.242821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.831 [2024-11-20 11:42:27.242831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:39.831 [2024-11-20 11:42:27.242846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.831 [2024-11-20 11:42:27.242855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:39.831 [2024-11-20 11:42:27.242870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fc000 len:0x1000 key:0x1be300 00:19:39.831 [2024-11-20 11:42:27.242879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:39.831 [2024-11-20 11:42:27.242894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fe000 len:0x1000 key:0x1be300 00:19:39.831 [2024-11-20 11:42:27.242903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:39.831 [2024-11-20 11:42:27.242919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fa000 len:0x1000 key:0x1be300 00:19:39.831 [2024-11-20 11:42:27.242929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:39.831 [2024-11-20 11:42:27.242944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f8000 len:0x1000 key:0x1be300 00:19:39.831 [2024-11-20 11:42:27.242954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:39.831 [2024-11-20 11:42:27.242968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x1be300 00:19:39.831 [2024-11-20 11:42:27.242978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:39.831 [2024-11-20 11:42:27.242993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x1be300 00:19:39.831 [2024-11-20 11:42:27.243002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:39.831 [2024-11-20 11:42:27.243017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f4000 len:0x1000 key:0x1be300 00:19:39.831 [2024-11-20 11:42:27.243026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:39.831 [2024-11-20 11:42:27.243046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f2000 len:0x1000 key:0x1be300 00:19:39.831 [2024-11-20 11:42:27.243055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:39.831 [2024-11-20 11:42:27.243070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f0000 len:0x1000 key:0x1be300 00:19:39.831 [2024-11-20 11:42:27.243079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:39.831 [2024-11-20 11:42:27.243094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ee000 len:0x1000 key:0x1be300 00:19:39.831 [2024-11-20 11:42:27.243104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:39.831 [2024-11-20 11:42:27.243118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ec000 len:0x1000 key:0x1be300 00:19:39.831 [2024-11-20 11:42:27.243128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:39.831 [2024-11-20 11:42:27.243145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ea000 len:0x1000 key:0x1be300 00:19:39.831 [2024-11-20 11:42:27.243155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:39.831 [2024-11-20 11:42:27.243171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e8000 len:0x1000 key:0x1be300 00:19:39.831 [2024-11-20 11:42:27.243182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:39.831 [2024-11-20 11:42:27.243198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e6000 len:0x1000 key:0x1be300 00:19:39.831 [2024-11-20 11:42:27.243211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:39.831 [2024-11-20 11:42:27.243227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e4000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:27.243237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:27.243252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e2000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:27.243262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:39.832 17745.62 IOPS, 69.32 MiB/s [2024-11-20T10:42:43.312Z] 16478.07 IOPS, 64.37 MiB/s [2024-11-20T10:42:43.312Z] 15379.53 IOPS, 60.08 MiB/s [2024-11-20T10:42:43.312Z] 14990.31 IOPS, 58.56 MiB/s [2024-11-20T10:42:43.312Z] 15198.47 IOPS, 59.37 MiB/s [2024-11-20T10:42:43.312Z] 15365.89 IOPS, 60.02 MiB/s [2024-11-20T10:42:43.312Z] 15348.74 IOPS, 59.96 MiB/s [2024-11-20T10:42:43.312Z] 15331.65 IOPS, 59.89 MiB/s [2024-11-20T10:42:43.312Z] 15400.81 IOPS, 60.16 MiB/s [2024-11-20T10:42:43.312Z] 15545.23 IOPS, 60.72 MiB/s [2024-11-20T10:42:43.312Z] 15673.30 IOPS, 61.22 MiB/s [2024-11-20T10:42:43.312Z] 15682.38 IOPS, 61.26 MiB/s [2024-11-20T10:42:43.312Z] 15650.88 IOPS, 61.14 MiB/s [2024-11-20T10:42:43.312Z] [2024-11-20 11:42:40.609458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.609501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f4000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.610109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.610134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.832 [2024-11-20 11:42:40.610156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.832 [2024-11-20 11:42:40.610177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82128 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x20000436e000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.610198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d6000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.610220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.832 [2024-11-20 11:42:40.610249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.610270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.832 [2024-11-20 11:42:40.610291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433a000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.610312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.832 [2024-11-20 11:42:40.610334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.832 [2024-11-20 11:42:40.610354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004342000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.610375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.610396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fc000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.610418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 
sqhd:0032 p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004336000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.610439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d4000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.610460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.610481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004314000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.610505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.610526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:39.832 
[2024-11-20 11:42:40.610538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.832 [2024-11-20 11:42:40.610547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.610568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.610589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.610610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.832 [2024-11-20 11:42:40.610631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 
nsid:1 lba:82232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x1be300 00:19:39.832 [2024-11-20 11:42:40.610652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:39.832 [2024-11-20 11:42:40.610663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.833 [2024-11-20 11:42:40.610672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.610684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f0000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.610693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.610705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.610714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.610726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.610737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.610748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.833 
[2024-11-20 11:42:40.610758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.610769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.610779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.610911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.833 [2024-11-20 11:42:40.610922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.610933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.833 [2024-11-20 11:42:40.610943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.610955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.833 [2024-11-20 11:42:40.610964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.610976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e0000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.610985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.610997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e4000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.611006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.833 [2024-11-20 11:42:40.611026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.611052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.611072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ee000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.611093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611107] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.611116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004346000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.611137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.833 [2024-11-20 11:42:40.611158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.833 [2024-11-20 11:42:40.611178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.611199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82920 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.833 [2024-11-20 11:42:40.611220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dc000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.611241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.833 [2024-11-20 11:42:40.611261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.833 [2024-11-20 11:42:40.611282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.833 [2024-11-20 11:42:40.611303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fe000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.611324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.611345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004340000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.611368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004332000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.611390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.833 [2024-11-20 11:42:40.611410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.833 [2024-11-20 11:42:40.611431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 
11:42:40.611442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.833 [2024-11-20 11:42:40.611451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.611472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x1be300 00:19:39.833 [2024-11-20 11:42:40.611492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:39.833 [2024-11-20 11:42:40.611504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.833 [2024-11-20 11:42:40.611514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:39.834 [2024-11-20 11:42:40.611525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x1be300 00:19:39.834 [2024-11-20 11:42:40.611534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.834 [2024-11-20 11:42:40.611546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 
lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.834 [2024-11-20 11:42:40.611555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:39.834 15634.38 IOPS, 61.07 MiB/s [2024-11-20T10:42:43.314Z] 15734.07 IOPS, 61.46 MiB/s [2024-11-20T10:42:43.314Z] 15826.46 IOPS, 61.82 MiB/s [2024-11-20T10:42:43.314Z] Received shutdown signal, test time was about 28.107537 seconds 00:19:39.834 00:19:39.834 Latency(us) 00:19:39.834 [2024-11-20T10:42:43.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.834 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:39.834 Verification LBA range: start 0x0 length 0x4000 00:19:39.834 Nvme0n1 : 28.11 15831.03 61.84 0.00 0.00 8066.03 56.77 3019898.88 00:19:39.834 [2024-11-20T10:42:43.314Z] =================================================================================================================== 00:19:39.834 [2024-11-20T10:42:43.314Z] Total : 15831.03 61.84 0.00 0.00 8066.03 56.77 3019898.88 00:19:39.834 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:39.834 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:19:39.834 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:39.834 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:19:39.834 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:39.834 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@99 -- # sync 00:19:39.834 11:42:43 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:19:39.834 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:19:39.834 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # set +e 00:19:39.834 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:39.834 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:19:39.834 rmmod nvme_rdma 00:19:40.093 rmmod nvme_fabrics 00:19:40.093 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:40.093 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # set -e 00:19:40.093 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # return 0 00:19:40.093 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # '[' -n 1684526 ']' 00:19:40.093 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@337 -- # killprocess 1684526 00:19:40.093 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1684526 ']' 00:19:40.093 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1684526 00:19:40.093 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:19:40.093 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.093 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1684526 00:19:40.093 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:40.093 11:42:43 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:40.093 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1684526' 00:19:40.093 killing process with pid 1684526 00:19:40.093 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1684526 00:19:40.093 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1684526 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # nvmf_fini 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@264 -- # local dev 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@267 -- # remove_target_ns 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@268 -- # delete_main_bridge 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@130 -- # return 0 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 
00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:19:40.352 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:19:40.353 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:19:40.353 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:19:40.353 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:19:40.353 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:19:40.353 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # _dev=0 00:19:40.353 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # dev_map=() 00:19:40.353 11:42:43 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@284 -- # iptr 00:19:40.353 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@542 -- # iptables-save 00:19:40.353 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@542 -- # iptables-restore 00:19:40.353 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:19:40.353 00:19:40.353 real 0m38.333s 00:19:40.353 user 1m49.904s 00:19:40.353 sys 0m9.052s 00:19:40.353 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.353 11:42:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:40.353 ************************************ 00:19:40.353 END TEST nvmf_host_multipath_status 00:19:40.353 ************************************ 00:19:40.353 11:42:43 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:19:40.353 11:42:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:40.353 11:42:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.353 11:42:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.353 ************************************ 00:19:40.353 START TEST nvmf_identify_kernel_target 00:19:40.353 ************************************ 00:19:40.353 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:19:40.613 * Looking for test storage... 
00:19:40.613 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:19:40.613 11:42:43 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:40.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.613 --rc genhtml_branch_coverage=1 00:19:40.613 --rc genhtml_function_coverage=1 00:19:40.613 --rc genhtml_legend=1 00:19:40.613 --rc geninfo_all_blocks=1 00:19:40.613 --rc geninfo_unexecuted_blocks=1 00:19:40.613 00:19:40.613 ' 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:40.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.613 --rc genhtml_branch_coverage=1 00:19:40.613 --rc genhtml_function_coverage=1 00:19:40.613 --rc genhtml_legend=1 00:19:40.613 --rc geninfo_all_blocks=1 00:19:40.613 --rc geninfo_unexecuted_blocks=1 00:19:40.613 00:19:40.613 ' 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:40.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.613 --rc genhtml_branch_coverage=1 00:19:40.613 --rc genhtml_function_coverage=1 00:19:40.613 --rc genhtml_legend=1 00:19:40.613 --rc geninfo_all_blocks=1 00:19:40.613 --rc geninfo_unexecuted_blocks=1 00:19:40.613 00:19:40.613 ' 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:40.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.613 --rc genhtml_branch_coverage=1 00:19:40.613 --rc genhtml_function_coverage=1 00:19:40.613 --rc genhtml_legend=1 00:19:40.613 --rc geninfo_all_blocks=1 00:19:40.613 --rc geninfo_unexecuted_blocks=1 00:19:40.613 00:19:40.613 ' 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@7 -- # uname -s 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 
00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.613 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:40.614 11:42:43 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@50 -- # : 0 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:40.614 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # remove_target_ns 00:19:40.614 11:42:43 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # xtrace_disable 00:19:40.614 11:42:43 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@131 -- # pci_devs=() 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@135 -- # net_devs=() 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@136 -- # e810=() 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@136 -- # local -ga e810 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@137 -- # x722=() 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@137 -- # local -ga x722 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@138 -- # mlx=() 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@138 -- # local -ga mlx 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:47.184 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.185 
11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:47.185 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:47.185 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:47.185 11:42:50 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:47.185 Found net devices under 0000:18:00.0: mlx_0_0 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:47.185 Found net devices under 0000:18:00.1: mlx_0_1 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # get_rdma_if_list 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # rdma_devs=() 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev 
rxe_net_devs rdma_devs 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@89 -- # continue 2 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:19:47.185 11:42:50 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@89 -- # continue 2 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # is_hw=yes 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:19:47.185 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@61 -- # uname 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_cm 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_core 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_umad 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe iw_cm 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@70 -- # modprobe rdma_cm 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@28 -- # local -g _dev 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:19:47.186 11:42:50 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # key_initiator=target1 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772161 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:19:47.186 
11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:19:47.186 10.0.0.1 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772162 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:19:47.186 10.0.0.2 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
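The `val_to_ip` calls traced above (setup.sh@11-13) turn the integer pool values 167772161 and 167772162 into the dotted-quad addresses 10.0.0.1 and 10.0.0.2 that get assigned to `mlx_0_0` and `mlx_0_1`. A minimal sketch of that conversion, assuming the helper simply unpacks the four bytes of a 32-bit value as the logged `printf '%u.%u.%u.%u\n' 10 0 0 1` suggests:

```shell
#!/usr/bin/env bash
# Sketch of the val_to_ip helper seen in the trace: unpack a 32-bit
# integer into dotted-quad form (167772161 == 0x0A000001 -> 10.0.0.1).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

This also explains the loop guard `(( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 ))` in setup.sh@31: each interface pair consumes two consecutive addresses from the 0x0a000001 pool, so the last octet advances by 2 per pair.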
nvmf/setup.sh@75 -- # set_up mlx_0_0 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:47.186 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/setup.sh@38 -- # ping_ips 1 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target0 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:19:47.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:47.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:19:47.187 00:19:47.187 --- 10.0.0.2 ping statistics --- 00:19:47.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.187 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target0 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:47.187 11:42:50 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:19:47.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.023 ms 00:19:47.187 00:19:47.187 --- 10.0.0.2 ping statistics --- 00:19:47.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.187 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # return 0 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target0 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:19:47.187 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:47.188 11:42:50 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target0 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:47.188 
11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:47.188 
11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.2 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.2 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.2 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # local block nvme 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # modprobe nvmet 00:19:47.188 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:47.189 11:42:50 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:19:50.471 Waiting for block devices as requested 00:19:50.471 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:19:50.471 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:19:50.471 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:19:50.471 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:19:50.730 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:19:50.730 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:19:50.730 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:19:50.730 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:19:50.989 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:19:50.989 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:19:50.989 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:19:51.248 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:19:51.249 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:19:51.249 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:19:51.507 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:19:51.507 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:19:51.507 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:19:51.766 11:42:55 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:19:51.766 No valid GPT data, bailing 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:51.766 11:42:55 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # echo 1 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@471 -- # echo 1 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@473 -- # echo 10.0.0.2 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # echo rdma 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@475 -- # echo 4420 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # echo ipv4 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:51.766 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -a 10.0.0.2 -t rdma -s 4420 00:19:52.027 00:19:52.027 Discovery Log Number of Records 2, Generation counter 2 00:19:52.027 =====Discovery Log Entry 0====== 00:19:52.027 trtype: rdma 00:19:52.027 adrfam: ipv4 00:19:52.027 subtype: current discovery subsystem 00:19:52.027 treq: not specified, sq flow control disable supported 00:19:52.027 portid: 1 00:19:52.027 
trsvcid: 4420 00:19:52.027 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:52.028 traddr: 10.0.0.2 00:19:52.028 eflags: none 00:19:52.028 rdma_prtype: not specified 00:19:52.028 rdma_qptype: connected 00:19:52.028 rdma_cms: rdma-cm 00:19:52.028 rdma_pkey: 0x0000 00:19:52.028 =====Discovery Log Entry 1====== 00:19:52.028 trtype: rdma 00:19:52.028 adrfam: ipv4 00:19:52.028 subtype: nvme subsystem 00:19:52.028 treq: not specified, sq flow control disable supported 00:19:52.028 portid: 1 00:19:52.028 trsvcid: 4420 00:19:52.028 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:52.028 traddr: 10.0.0.2 00:19:52.028 eflags: none 00:19:52.028 rdma_prtype: not specified 00:19:52.028 rdma_qptype: connected 00:19:52.028 rdma_cms: rdma-cm 00:19:52.028 rdma_pkey: 0x0000 00:19:52.028 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:10.0.0.2 00:19:52.028 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:52.028 ===================================================== 00:19:52.028 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:52.028 ===================================================== 00:19:52.028 Controller Capabilities/Features 00:19:52.028 ================================ 00:19:52.028 Vendor ID: 0000 00:19:52.028 Subsystem Vendor ID: 0000 00:19:52.028 Serial Number: 3ba44eed9d25c176cff8 00:19:52.028 Model Number: Linux 00:19:52.028 Firmware Version: 6.8.9-20 00:19:52.028 Recommended Arb Burst: 0 00:19:52.028 IEEE OUI Identifier: 00 00 00 00:19:52.028 Multi-path I/O 00:19:52.028 May have multiple subsystem ports: No 00:19:52.028 May have multiple controllers: No 00:19:52.028 Associated with SR-IOV VF: No 00:19:52.028 Max Data Transfer Size: Unlimited 00:19:52.028 Max Number of Namespaces: 0 00:19:52.028 Max Number of I/O Queues: 1024 00:19:52.028 NVMe Specification Version 
(VS): 1.3 00:19:52.028 NVMe Specification Version (Identify): 1.3 00:19:52.028 Maximum Queue Entries: 128 00:19:52.028 Contiguous Queues Required: No 00:19:52.028 Arbitration Mechanisms Supported 00:19:52.028 Weighted Round Robin: Not Supported 00:19:52.028 Vendor Specific: Not Supported 00:19:52.028 Reset Timeout: 7500 ms 00:19:52.028 Doorbell Stride: 4 bytes 00:19:52.028 NVM Subsystem Reset: Not Supported 00:19:52.028 Command Sets Supported 00:19:52.028 NVM Command Set: Supported 00:19:52.028 Boot Partition: Not Supported 00:19:52.028 Memory Page Size Minimum: 4096 bytes 00:19:52.028 Memory Page Size Maximum: 4096 bytes 00:19:52.028 Persistent Memory Region: Not Supported 00:19:52.028 Optional Asynchronous Events Supported 00:19:52.028 Namespace Attribute Notices: Not Supported 00:19:52.028 Firmware Activation Notices: Not Supported 00:19:52.028 ANA Change Notices: Not Supported 00:19:52.028 PLE Aggregate Log Change Notices: Not Supported 00:19:52.028 LBA Status Info Alert Notices: Not Supported 00:19:52.028 EGE Aggregate Log Change Notices: Not Supported 00:19:52.028 Normal NVM Subsystem Shutdown event: Not Supported 00:19:52.028 Zone Descriptor Change Notices: Not Supported 00:19:52.028 Discovery Log Change Notices: Supported 00:19:52.028 Controller Attributes 00:19:52.028 128-bit Host Identifier: Not Supported 00:19:52.028 Non-Operational Permissive Mode: Not Supported 00:19:52.028 NVM Sets: Not Supported 00:19:52.028 Read Recovery Levels: Not Supported 00:19:52.028 Endurance Groups: Not Supported 00:19:52.028 Predictable Latency Mode: Not Supported 00:19:52.028 Traffic Based Keep ALive: Not Supported 00:19:52.028 Namespace Granularity: Not Supported 00:19:52.028 SQ Associations: Not Supported 00:19:52.028 UUID List: Not Supported 00:19:52.028 Multi-Domain Subsystem: Not Supported 00:19:52.028 Fixed Capacity Management: Not Supported 00:19:52.028 Variable Capacity Management: Not Supported 00:19:52.028 Delete Endurance Group: Not Supported 00:19:52.028 Delete 
NVM Set: Not Supported 00:19:52.028 Extended LBA Formats Supported: Not Supported 00:19:52.028 Flexible Data Placement Supported: Not Supported 00:19:52.028 00:19:52.028 Controller Memory Buffer Support 00:19:52.028 ================================ 00:19:52.028 Supported: No 00:19:52.028 00:19:52.028 Persistent Memory Region Support 00:19:52.028 ================================ 00:19:52.028 Supported: No 00:19:52.028 00:19:52.028 Admin Command Set Attributes 00:19:52.028 ============================ 00:19:52.028 Security Send/Receive: Not Supported 00:19:52.028 Format NVM: Not Supported 00:19:52.028 Firmware Activate/Download: Not Supported 00:19:52.028 Namespace Management: Not Supported 00:19:52.028 Device Self-Test: Not Supported 00:19:52.028 Directives: Not Supported 00:19:52.028 NVMe-MI: Not Supported 00:19:52.028 Virtualization Management: Not Supported 00:19:52.028 Doorbell Buffer Config: Not Supported 00:19:52.028 Get LBA Status Capability: Not Supported 00:19:52.028 Command & Feature Lockdown Capability: Not Supported 00:19:52.028 Abort Command Limit: 1 00:19:52.028 Async Event Request Limit: 1 00:19:52.028 Number of Firmware Slots: N/A 00:19:52.028 Firmware Slot 1 Read-Only: N/A 00:19:52.028 Firmware Activation Without Reset: N/A 00:19:52.028 Multiple Update Detection Support: N/A 00:19:52.028 Firmware Update Granularity: No Information Provided 00:19:52.028 Per-Namespace SMART Log: No 00:19:52.028 Asymmetric Namespace Access Log Page: Not Supported 00:19:52.028 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:52.028 Command Effects Log Page: Not Supported 00:19:52.028 Get Log Page Extended Data: Supported 00:19:52.028 Telemetry Log Pages: Not Supported 00:19:52.028 Persistent Event Log Pages: Not Supported 00:19:52.028 Supported Log Pages Log Page: May Support 00:19:52.028 Commands Supported & Effects Log Page: Not Supported 00:19:52.028 Feature Identifiers & Effects Log Page:May Support 00:19:52.028 NVMe-MI Commands & Effects Log Page: May 
Support 00:19:52.028 Data Area 4 for Telemetry Log: Not Supported 00:19:52.028 Error Log Page Entries Supported: 1 00:19:52.028 Keep Alive: Not Supported 00:19:52.028 00:19:52.028 NVM Command Set Attributes 00:19:52.028 ========================== 00:19:52.028 Submission Queue Entry Size 00:19:52.028 Max: 1 00:19:52.028 Min: 1 00:19:52.028 Completion Queue Entry Size 00:19:52.028 Max: 1 00:19:52.028 Min: 1 00:19:52.028 Number of Namespaces: 0 00:19:52.028 Compare Command: Not Supported 00:19:52.028 Write Uncorrectable Command: Not Supported 00:19:52.028 Dataset Management Command: Not Supported 00:19:52.028 Write Zeroes Command: Not Supported 00:19:52.028 Set Features Save Field: Not Supported 00:19:52.028 Reservations: Not Supported 00:19:52.028 Timestamp: Not Supported 00:19:52.028 Copy: Not Supported 00:19:52.028 Volatile Write Cache: Not Present 00:19:52.028 Atomic Write Unit (Normal): 1 00:19:52.028 Atomic Write Unit (PFail): 1 00:19:52.028 Atomic Compare & Write Unit: 1 00:19:52.028 Fused Compare & Write: Not Supported 00:19:52.028 Scatter-Gather List 00:19:52.028 SGL Command Set: Supported 00:19:52.028 SGL Keyed: Supported 00:19:52.028 SGL Bit Bucket Descriptor: Not Supported 00:19:52.028 SGL Metadata Pointer: Not Supported 00:19:52.028 Oversized SGL: Not Supported 00:19:52.028 SGL Metadata Address: Not Supported 00:19:52.028 SGL Offset: Supported 00:19:52.028 Transport SGL Data Block: Not Supported 00:19:52.028 Replay Protected Memory Block: Not Supported 00:19:52.028 00:19:52.028 Firmware Slot Information 00:19:52.028 ========================= 00:19:52.028 Active slot: 0 00:19:52.028 00:19:52.028 00:19:52.028 Error Log 00:19:52.028 ========= 00:19:52.028 00:19:52.029 Active Namespaces 00:19:52.029 ================= 00:19:52.029 Discovery Log Page 00:19:52.029 ================== 00:19:52.029 Generation Counter: 2 00:19:52.029 Number of Records: 2 00:19:52.029 Record Format: 0 00:19:52.029 00:19:52.029 Discovery Log Entry 0 00:19:52.029 ---------------------- 
00:19:52.029 Transport Type: 1 (RDMA) 00:19:52.029 Address Family: 1 (IPv4) 00:19:52.029 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:52.029 Entry Flags: 00:19:52.029 Duplicate Returned Information: 0 00:19:52.029 Explicit Persistent Connection Support for Discovery: 0 00:19:52.029 Transport Requirements: 00:19:52.029 Secure Channel: Not Specified 00:19:52.029 Port ID: 1 (0x0001) 00:19:52.029 Controller ID: 65535 (0xffff) 00:19:52.029 Admin Max SQ Size: 32 00:19:52.029 Transport Service Identifier: 4420 00:19:52.029 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:52.029 Transport Address: 10.0.0.2 00:19:52.029 Transport Specific Address Subtype - RDMA 00:19:52.029 RDMA QP Service Type: 1 (Reliable Connected) 00:19:52.029 RDMA Provider Type: 1 (No provider specified) 00:19:52.029 RDMA CM Service: 1 (RDMA_CM) 00:19:52.029 Discovery Log Entry 1 00:19:52.029 ---------------------- 00:19:52.029 Transport Type: 1 (RDMA) 00:19:52.029 Address Family: 1 (IPv4) 00:19:52.029 Subsystem Type: 2 (NVM Subsystem) 00:19:52.029 Entry Flags: 00:19:52.029 Duplicate Returned Information: 0 00:19:52.029 Explicit Persistent Connection Support for Discovery: 0 00:19:52.029 Transport Requirements: 00:19:52.029 Secure Channel: Not Specified 00:19:52.029 Port ID: 1 (0x0001) 00:19:52.029 Controller ID: 65535 (0xffff) 00:19:52.029 Admin Max SQ Size: 32 00:19:52.029 Transport Service Identifier: 4420 00:19:52.029 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:52.029 Transport Address: 10.0.0.2 00:19:52.029 Transport Specific Address Subtype - RDMA 00:19:52.029 RDMA QP Service Type: 1 (Reliable Connected) 00:19:52.029 RDMA Provider Type: 1 (No provider specified) 00:19:52.029 RDMA CM Service: 1 (RDMA_CM) 00:19:52.029 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:52.290 get_feature(0x01) failed 00:19:52.290 get_feature(0x02) failed 00:19:52.290 get_feature(0x04) failed 00:19:52.290 ===================================================== 00:19:52.290 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:19:52.290 ===================================================== 00:19:52.290 Controller Capabilities/Features 00:19:52.290 ================================ 00:19:52.290 Vendor ID: 0000 00:19:52.290 Subsystem Vendor ID: 0000 00:19:52.290 Serial Number: 1a70ac64d6b710df00fc 00:19:52.290 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:52.290 Firmware Version: 6.8.9-20 00:19:52.290 Recommended Arb Burst: 6 00:19:52.290 IEEE OUI Identifier: 00 00 00 00:19:52.290 Multi-path I/O 00:19:52.290 May have multiple subsystem ports: Yes 00:19:52.290 May have multiple controllers: Yes 00:19:52.290 Associated with SR-IOV VF: No 00:19:52.290 Max Data Transfer Size: 1048576 00:19:52.290 Max Number of Namespaces: 1024 00:19:52.290 Max Number of I/O Queues: 128 00:19:52.290 NVMe Specification Version (VS): 1.3 00:19:52.290 NVMe Specification Version (Identify): 1.3 00:19:52.290 Maximum Queue Entries: 128 00:19:52.290 Contiguous Queues Required: No 00:19:52.290 Arbitration Mechanisms Supported 00:19:52.290 Weighted Round Robin: Not Supported 00:19:52.290 Vendor Specific: Not Supported 00:19:52.290 Reset Timeout: 7500 ms 00:19:52.290 Doorbell Stride: 4 bytes 00:19:52.290 NVM Subsystem Reset: Not Supported 00:19:52.290 Command Sets Supported 00:19:52.290 NVM Command Set: Supported 00:19:52.290 Boot Partition: Not Supported 00:19:52.290 Memory Page Size Minimum: 4096 bytes 00:19:52.290 Memory Page Size Maximum: 4096 bytes 00:19:52.290 Persistent Memory Region: Not Supported 00:19:52.290 Optional Asynchronous Events Supported 00:19:52.290 Namespace Attribute Notices: Supported 00:19:52.290 Firmware Activation Notices: Not Supported 00:19:52.290 ANA Change Notices: Supported 
00:19:52.290 PLE Aggregate Log Change Notices: Not Supported 00:19:52.290 LBA Status Info Alert Notices: Not Supported 00:19:52.290 EGE Aggregate Log Change Notices: Not Supported 00:19:52.290 Normal NVM Subsystem Shutdown event: Not Supported 00:19:52.290 Zone Descriptor Change Notices: Not Supported 00:19:52.290 Discovery Log Change Notices: Not Supported 00:19:52.290 Controller Attributes 00:19:52.290 128-bit Host Identifier: Supported 00:19:52.290 Non-Operational Permissive Mode: Not Supported 00:19:52.290 NVM Sets: Not Supported 00:19:52.290 Read Recovery Levels: Not Supported 00:19:52.290 Endurance Groups: Not Supported 00:19:52.290 Predictable Latency Mode: Not Supported 00:19:52.290 Traffic Based Keep ALive: Supported 00:19:52.290 Namespace Granularity: Not Supported 00:19:52.290 SQ Associations: Not Supported 00:19:52.290 UUID List: Not Supported 00:19:52.290 Multi-Domain Subsystem: Not Supported 00:19:52.290 Fixed Capacity Management: Not Supported 00:19:52.290 Variable Capacity Management: Not Supported 00:19:52.290 Delete Endurance Group: Not Supported 00:19:52.290 Delete NVM Set: Not Supported 00:19:52.290 Extended LBA Formats Supported: Not Supported 00:19:52.290 Flexible Data Placement Supported: Not Supported 00:19:52.290 00:19:52.290 Controller Memory Buffer Support 00:19:52.290 ================================ 00:19:52.290 Supported: No 00:19:52.290 00:19:52.290 Persistent Memory Region Support 00:19:52.290 ================================ 00:19:52.290 Supported: No 00:19:52.290 00:19:52.290 Admin Command Set Attributes 00:19:52.290 ============================ 00:19:52.290 Security Send/Receive: Not Supported 00:19:52.291 Format NVM: Not Supported 00:19:52.291 Firmware Activate/Download: Not Supported 00:19:52.291 Namespace Management: Not Supported 00:19:52.291 Device Self-Test: Not Supported 00:19:52.291 Directives: Not Supported 00:19:52.291 NVMe-MI: Not Supported 00:19:52.291 Virtualization Management: Not Supported 00:19:52.291 Doorbell 
Buffer Config: Not Supported 00:19:52.291 Get LBA Status Capability: Not Supported 00:19:52.291 Command & Feature Lockdown Capability: Not Supported 00:19:52.291 Abort Command Limit: 4 00:19:52.291 Async Event Request Limit: 4 00:19:52.291 Number of Firmware Slots: N/A 00:19:52.291 Firmware Slot 1 Read-Only: N/A 00:19:52.291 Firmware Activation Without Reset: N/A 00:19:52.291 Multiple Update Detection Support: N/A 00:19:52.291 Firmware Update Granularity: No Information Provided 00:19:52.291 Per-Namespace SMART Log: Yes 00:19:52.291 Asymmetric Namespace Access Log Page: Supported 00:19:52.291 ANA Transition Time : 10 sec 00:19:52.291 00:19:52.291 Asymmetric Namespace Access Capabilities 00:19:52.291 ANA Optimized State : Supported 00:19:52.291 ANA Non-Optimized State : Supported 00:19:52.291 ANA Inaccessible State : Supported 00:19:52.291 ANA Persistent Loss State : Supported 00:19:52.291 ANA Change State : Supported 00:19:52.291 ANAGRPID is not changed : No 00:19:52.291 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:52.291 00:19:52.291 ANA Group Identifier Maximum : 128 00:19:52.291 Number of ANA Group Identifiers : 128 00:19:52.291 Max Number of Allowed Namespaces : 1024 00:19:52.291 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:19:52.291 Command Effects Log Page: Supported 00:19:52.291 Get Log Page Extended Data: Supported 00:19:52.291 Telemetry Log Pages: Not Supported 00:19:52.291 Persistent Event Log Pages: Not Supported 00:19:52.291 Supported Log Pages Log Page: May Support 00:19:52.291 Commands Supported & Effects Log Page: Not Supported 00:19:52.291 Feature Identifiers & Effects Log Page:May Support 00:19:52.291 NVMe-MI Commands & Effects Log Page: May Support 00:19:52.291 Data Area 4 for Telemetry Log: Not Supported 00:19:52.291 Error Log Page Entries Supported: 128 00:19:52.291 Keep Alive: Supported 00:19:52.291 Keep Alive Granularity: 1000 ms 00:19:52.291 00:19:52.291 NVM Command Set Attributes 00:19:52.291 ========================== 
00:19:52.291 Submission Queue Entry Size 00:19:52.291 Max: 64 00:19:52.291 Min: 64 00:19:52.291 Completion Queue Entry Size 00:19:52.291 Max: 16 00:19:52.291 Min: 16 00:19:52.291 Number of Namespaces: 1024 00:19:52.291 Compare Command: Not Supported 00:19:52.291 Write Uncorrectable Command: Not Supported 00:19:52.291 Dataset Management Command: Supported 00:19:52.291 Write Zeroes Command: Supported 00:19:52.291 Set Features Save Field: Not Supported 00:19:52.291 Reservations: Not Supported 00:19:52.291 Timestamp: Not Supported 00:19:52.291 Copy: Not Supported 00:19:52.291 Volatile Write Cache: Present 00:19:52.291 Atomic Write Unit (Normal): 1 00:19:52.291 Atomic Write Unit (PFail): 1 00:19:52.291 Atomic Compare & Write Unit: 1 00:19:52.291 Fused Compare & Write: Not Supported 00:19:52.291 Scatter-Gather List 00:19:52.291 SGL Command Set: Supported 00:19:52.291 SGL Keyed: Supported 00:19:52.291 SGL Bit Bucket Descriptor: Not Supported 00:19:52.291 SGL Metadata Pointer: Not Supported 00:19:52.291 Oversized SGL: Not Supported 00:19:52.291 SGL Metadata Address: Not Supported 00:19:52.291 SGL Offset: Supported 00:19:52.291 Transport SGL Data Block: Not Supported 00:19:52.291 Replay Protected Memory Block: Not Supported 00:19:52.291 00:19:52.291 Firmware Slot Information 00:19:52.291 ========================= 00:19:52.291 Active slot: 0 00:19:52.291 00:19:52.291 Asymmetric Namespace Access 00:19:52.291 =========================== 00:19:52.291 Change Count : 0 00:19:52.291 Number of ANA Group Descriptors : 1 00:19:52.291 ANA Group Descriptor : 0 00:19:52.291 ANA Group ID : 1 00:19:52.291 Number of NSID Values : 1 00:19:52.291 Change Count : 0 00:19:52.291 ANA State : 1 00:19:52.291 Namespace Identifier : 1 00:19:52.291 00:19:52.291 Commands Supported and Effects 00:19:52.291 ============================== 00:19:52.291 Admin Commands 00:19:52.291 -------------- 00:19:52.291 Get Log Page (02h): Supported 00:19:52.291 Identify (06h): Supported 00:19:52.291 Abort (08h): 
Supported 00:19:52.291 Set Features (09h): Supported 00:19:52.291 Get Features (0Ah): Supported 00:19:52.291 Asynchronous Event Request (0Ch): Supported 00:19:52.291 Keep Alive (18h): Supported 00:19:52.291 I/O Commands 00:19:52.291 ------------ 00:19:52.291 Flush (00h): Supported 00:19:52.291 Write (01h): Supported LBA-Change 00:19:52.291 Read (02h): Supported 00:19:52.291 Write Zeroes (08h): Supported LBA-Change 00:19:52.291 Dataset Management (09h): Supported 00:19:52.291 00:19:52.291 Error Log 00:19:52.291 ========= 00:19:52.291 Entry: 0 00:19:52.291 Error Count: 0x3 00:19:52.291 Submission Queue Id: 0x0 00:19:52.291 Command Id: 0x5 00:19:52.291 Phase Bit: 0 00:19:52.291 Status Code: 0x2 00:19:52.291 Status Code Type: 0x0 00:19:52.291 Do Not Retry: 1 00:19:52.291 Error Location: 0x28 00:19:52.291 LBA: 0x0 00:19:52.291 Namespace: 0x0 00:19:52.291 Vendor Log Page: 0x0 00:19:52.291 ----------- 00:19:52.291 Entry: 1 00:19:52.291 Error Count: 0x2 00:19:52.291 Submission Queue Id: 0x0 00:19:52.291 Command Id: 0x5 00:19:52.291 Phase Bit: 0 00:19:52.291 Status Code: 0x2 00:19:52.291 Status Code Type: 0x0 00:19:52.291 Do Not Retry: 1 00:19:52.291 Error Location: 0x28 00:19:52.291 LBA: 0x0 00:19:52.291 Namespace: 0x0 00:19:52.291 Vendor Log Page: 0x0 00:19:52.291 ----------- 00:19:52.291 Entry: 2 00:19:52.291 Error Count: 0x1 00:19:52.291 Submission Queue Id: 0x0 00:19:52.291 Command Id: 0x0 00:19:52.291 Phase Bit: 0 00:19:52.291 Status Code: 0x2 00:19:52.291 Status Code Type: 0x0 00:19:52.291 Do Not Retry: 1 00:19:52.291 Error Location: 0x28 00:19:52.291 LBA: 0x0 00:19:52.291 Namespace: 0x0 00:19:52.291 Vendor Log Page: 0x0 00:19:52.291 00:19:52.291 Number of Queues 00:19:52.291 ================ 00:19:52.291 Number of I/O Submission Queues: 128 00:19:52.291 Number of I/O Completion Queues: 128 00:19:52.291 00:19:52.291 ZNS Specific Controller Data 00:19:52.291 ============================ 00:19:52.291 Zone Append Size Limit: 0 00:19:52.291 00:19:52.291 00:19:52.291 
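The xtrace earlier in this run shows `configure_kernel_target` (nvmf/common.sh) building the kernel NVMe-oF target through nvmet configfs: three `mkdir`s for the subsystem, namespace, and port, a series of `echo`s for the model string, allow-any-host flag, backing device, and listener address, then an `ln -s` to expose the subsystem on the port. A condensed, hypothetical reconstruction of that sequence follows. The echoed values (the test NQN, `/dev/nvme0n1`, `10.0.0.2`, `rdma`, `4420`, `ipv4`) are copied from the trace, but the configfs attribute file names and the `run`/`APPLY` dry-run helper are my additions, not part of nvmf/common.sh; it prints commands by default since the real steps require root and the nvmet/nvmet-rdma modules.

```shell
#!/bin/sh
# Hypothetical sketch of the configure_kernel_target configfs steps traced in
# the log. Values come from the xtrace; the attribute file names are assumed.
# Dry-runs by default (prints each command); set APPLY=1 to execute for real.

NQN="nqn.2016-06.io.spdk:testnqn"
NVMET=/sys/kernel/config/nvmet
SUBSYS="$NVMET/subsystems/$NQN"
NS="$SUBSYS/namespaces/1"
PORT="$NVMET/ports/1"

run() { if [ "${APPLY:-0}" = 1 ]; then "$@"; else echo "+ $*"; fi; }

run mkdir "$SUBSYS"                                # subsystem (common.sh@460)
run mkdir "$NS"                                    # namespace 1 (common.sh@461)
run mkdir "$PORT"                                  # port 1      (common.sh@462)
run sh -c "echo SPDK-$NQN > $SUBSYS/attr_model"    # model seen in identify output
run sh -c "echo 1 > $SUBSYS/attr_allow_any_host"
run sh -c "echo /dev/nvme0n1 > $NS/device_path"    # back namespace with local disk
run sh -c "echo 1 > $NS/enable"
run sh -c "echo 10.0.0.2 > $PORT/addr_traddr"      # RDMA listener at 10.0.0.2:4420
run sh -c "echo rdma > $PORT/addr_trtype"
run sh -c "echo 4420 > $PORT/addr_trsvcid"
run sh -c "echo ipv4 > $PORT/addr_adrfam"
run ln -s "$SUBSYS" "$PORT/subsystems/"            # expose subsystem on the port
```

After these steps, `nvme discover -t rdma -a 10.0.0.2 -s 4420` should return two records (the discovery subsystem plus `nqn.2016-06.io.spdk:testnqn`), matching the two-entry discovery log in the trace above.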
Active Namespaces 00:19:52.291 ================= 00:19:52.291 get_feature(0x05) failed 00:19:52.291 Namespace ID:1 00:19:52.291 Command Set Identifier: NVM (00h) 00:19:52.291 Deallocate: Supported 00:19:52.291 Deallocated/Unwritten Error: Not Supported 00:19:52.291 Deallocated Read Value: Unknown 00:19:52.291 Deallocate in Write Zeroes: Not Supported 00:19:52.291 Deallocated Guard Field: 0xFFFF 00:19:52.291 Flush: Supported 00:19:52.291 Reservation: Not Supported 00:19:52.291 Namespace Sharing Capabilities: Multiple Controllers 00:19:52.291 Size (in LBAs): 7814037168 (3726GiB) 00:19:52.291 Capacity (in LBAs): 7814037168 (3726GiB) 00:19:52.291 Utilization (in LBAs): 7814037168 (3726GiB) 00:19:52.291 UUID: 865b88d1-e8ef-4096-adca-a6b6eced67fd 00:19:52.291 Thin Provisioning: Not Supported 00:19:52.291 Per-NS Atomic Units: Yes 00:19:52.291 Atomic Boundary Size (Normal): 0 00:19:52.291 Atomic Boundary Size (PFail): 0 00:19:52.291 Atomic Boundary Offset: 0 00:19:52.291 NGUID/EUI64 Never Reused: No 00:19:52.291 ANA group ID: 1 00:19:52.291 Namespace Write Protected: No 00:19:52.291 Number of LBA Formats: 1 00:19:52.291 Current LBA Format: LBA Format #00 00:19:52.291 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:52.291 00:19:52.291 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:52.291 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:52.291 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@99 -- # sync 00:19:52.291 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:19:52.291 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:19:52.291 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # set +e 00:19:52.291 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@103 -- # for i in {1..20} 00:19:52.291 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:19:52.291 rmmod nvme_rdma 00:19:52.291 rmmod nvme_fabrics 00:19:52.291 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:52.291 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # set -e 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # return 0 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # nvmf_fini 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@264 -- # local dev 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@130 -- # return 0 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 
00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@41 -- # _dev=0 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # dev_map=() 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@284 -- # iptr 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@542 -- # iptables-save 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@542 -- # iptables-restore 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # echo 0 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:19:52.292 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@497 -- # modprobe -r nvmet_rdma nvmet 00:19:52.551 11:42:55 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:19:55.834 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:19:55.834 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:19:55.834 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:19:55.834 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:19:55.834 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:19:55.834 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:19:55.834 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:19:55.834 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:19:55.834 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:19:55.834 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:19:55.834 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:19:55.834 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:19:55.834 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:19:55.834 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:19:55.834 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:19:55.834 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:19:59.124 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:19:59.124 00:19:59.124 real 0m18.605s 00:19:59.124 user 0m4.833s 00:19:59.124 sys 0m9.938s 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.124 ************************************ 00:19:59.124 END TEST nvmf_identify_kernel_target 00:19:59.124 ************************************ 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # 
set +x 00:19:59.124 ************************************ 00:19:59.124 START TEST nvmf_auth_host 00:19:59.124 ************************************ 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:19:59.124 * Looking for test storage... 00:19:59.124 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:19:59.124 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:59.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.385 --rc genhtml_branch_coverage=1 00:19:59.385 --rc genhtml_function_coverage=1 00:19:59.385 --rc genhtml_legend=1 00:19:59.385 --rc geninfo_all_blocks=1 00:19:59.385 --rc geninfo_unexecuted_blocks=1 00:19:59.385 00:19:59.385 ' 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:59.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.385 --rc genhtml_branch_coverage=1 00:19:59.385 --rc genhtml_function_coverage=1 00:19:59.385 --rc genhtml_legend=1 00:19:59.385 --rc geninfo_all_blocks=1 00:19:59.385 --rc geninfo_unexecuted_blocks=1 00:19:59.385 00:19:59.385 ' 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:59.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.385 --rc genhtml_branch_coverage=1 00:19:59.385 --rc genhtml_function_coverage=1 00:19:59.385 --rc genhtml_legend=1 00:19:59.385 --rc geninfo_all_blocks=1 00:19:59.385 --rc geninfo_unexecuted_blocks=1 00:19:59.385 00:19:59.385 ' 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:59.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.385 --rc genhtml_branch_coverage=1 00:19:59.385 --rc genhtml_function_coverage=1 00:19:59.385 --rc genhtml_legend=1 00:19:59.385 --rc geninfo_all_blocks=1 00:19:59.385 --rc geninfo_unexecuted_blocks=1 00:19:59.385 00:19:59.385 ' 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.385 11:43:02 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@50 -- # : 0 00:19:59.385 11:43:02 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:59.385 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:59.385 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@21 -- # ckeys=() 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # remove_target_ns 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # xtrace_disable 00:19:59.386 11:43:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@131 -- # pci_devs=() 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@131 -- # local -a pci_devs 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@132 -- # pci_net_devs=() 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:20:05.955 11:43:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@133 -- # pci_drivers=() 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@133 -- # local -A pci_drivers 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@135 -- # net_devs=() 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@135 -- # local -ga net_devs 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@136 -- # e810=() 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@136 -- # local -ga e810 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@137 -- # x722=() 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@137 -- # local -ga x722 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@138 -- # mlx=() 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@138 -- # local -ga mlx 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:05.955 11:43:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:05.955 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:05.955 11:43:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:05.955 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:05.955 Found net devices under 0000:18:00.0: mlx_0_0 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:20:05.955 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:05.956 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.956 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:05.956 Found net devices under 0000:18:00.1: mlx_0_1 00:20:05.956 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.956 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:20:05.956 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:20:05.956 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # get_rdma_if_list 00:20:05.956 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@75 -- # rdma_devs=() 00:20:05.956 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:20:05.956 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:20:05.956 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:20:05.956 11:43:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@89 -- # continue 2 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@89 -- # continue 2 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # is_hw=yes 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@61 -- # uname 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_cm 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_core 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_umad 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe iw_cm 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@28 -- # local -g _dev 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # key_initiator=target1 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 
00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772161 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:20:05.956 10.0.0.1 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772162 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 
00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:20:05.956 10.0.0.2 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:20:05.956 11:43:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@38 -- # ping_ips 1 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:05.956 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target0 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:20:05.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:05.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.027 ms 00:20:05.957 00:20:05.957 --- 10.0.0.2 ping statistics --- 00:20:05.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.957 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target0 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:05.957 11:43:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:20:05.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:05.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.020 ms 00:20:05.957 00:20:05.957 --- 10.0.0.2 ping statistics --- 00:20:05.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.957 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair++ )) 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # return 0 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:20:05.957 
11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target0 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 
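The repeated `get_ip_address` → `cat /sys/class/net/<dev>/ifalias` chains above work because the setup phase wrote each test IP into the interface's `ifalias` node with `tee`, and every later lookup just reads it back. A sketch of that round trip, using a temporary directory as a stand-in for `/sys/class/net` (assumption: the real script operates on the sysfs path directly, which needs a live netdev):

```shell
# Stand-in sysfs tree; the real path is /sys/class/net/mlx_0_1/ifalias.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/mlx_0_1"

# Setup phase: store the address (trace: echo 10.0.0.2 | tee .../ifalias).
echo 10.0.0.2 | tee "$sysfs/mlx_0_1/ifalias" >/dev/null

# Lookup phase: get_ip_address resolves the logical dev, then reads it back.
ip=$(cat "$sysfs/mlx_0_1/ifalias")
echo "$ip"   # 10.0.0.2
```

Storing the IP out-of-band like this lets the helpers recover it without parsing `ip addr` output on every call.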
00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:05.957 
11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target0 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:05.957 11:43:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:05.957 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # nvmfpid=1697441 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # waitforlisten 1697441 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1697441 ']' 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.958 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=23d168521c038bf63d24d9230e4a4a5d 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.6MK 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 23d168521c038bf63d24d9230e4a4a5d 0 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 23d168521c038bf63d24d9230e4a4a5d 0 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=23d168521c038bf63d24d9230e4a4a5d 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 
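The `gen_dhchap_key null 32` trace above shows where the key material comes from: for a key of `len` hex characters, the helper reads `len / 2` random bytes and hex-encodes them. A sketch of that step (the trace uses `xxd -p -c0`; `tr -d '\n'` below serves the same no-wrap purpose):

```shell
# Generate len hex characters of DH-HMAC-CHAP key material from
# len/2 bytes of /dev/urandom, as gen_dhchap_key does for 'null 32'.
len=32
key=$(xxd -p -l $((len / 2)) /dev/urandom | tr -d '\n')
echo "${#key}"   # 32

# The helper then drops the result into a mode-0600 temp file
# (trace: mktemp -t spdk.key-null.XXX; chmod 0600 ...).
file=$(mktemp -t spdk.key-null.XXX)
printf '%s\n' "$key" > "$file"
chmod 0600 "$file"
```

The `null`/`sha256`/`sha384`/`sha512` argument only selects the digest id used later by `format_dhchap_key`; the raw material is random hex either way.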
00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.6MK 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.6MK 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.6MK 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:20:06.216 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:06.217 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:20:06.217 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:20:06.217 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:20:06.217 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:06.217 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=7d237c3a30820e980eab6412b76e89675037628eb1417fa8a6312c65f521430e 00:20:06.217 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:20:06.217 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.bDh 00:20:06.217 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 7d237c3a30820e980eab6412b76e89675037628eb1417fa8a6312c65f521430e 3 00:20:06.217 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 7d237c3a30820e980eab6412b76e89675037628eb1417fa8a6312c65f521430e 3 00:20:06.217 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:20:06.217 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@506 -- # prefix=DHHC-1 00:20:06.217 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=7d237c3a30820e980eab6412b76e89675037628eb1417fa8a6312c65f521430e 00:20:06.217 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:20:06.217 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.bDh 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.bDh 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.bDh 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=a4a900dc8f69beba121d47b4ebd42a075cae86d9c96aba5b 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.0HO 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 
a4a900dc8f69beba121d47b4ebd42a075cae86d9c96aba5b 0 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 a4a900dc8f69beba121d47b4ebd42a075cae86d9c96aba5b 0 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=a4a900dc8f69beba121d47b4ebd42a075cae86d9c96aba5b 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.0HO 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.0HO 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.0HO 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=fcb4a714c33459595a3c14bece68377559b528cfcf5b3718 00:20:06.476 
11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.mvF 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key fcb4a714c33459595a3c14bece68377559b528cfcf5b3718 2 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 fcb4a714c33459595a3c14bece68377559b528cfcf5b3718 2 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:20:06.476 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=fcb4a714c33459595a3c14bece68377559b528cfcf5b3718 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.mvF 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.mvF 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.mvF 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:20:06.477 11:43:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=5f90246cf62774859a89c8015ed2c5d9 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.Ctb 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 5f90246cf62774859a89c8015ed2c5d9 1 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 5f90246cf62774859a89c8015ed2c5d9 1 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=5f90246cf62774859a89c8015ed2c5d9 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.Ctb 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.Ctb 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Ctb 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=96f059a3e20e0c06cd4737a8f5ae27bf 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.fYA 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 96f059a3e20e0c06cd4737a8f5ae27bf 1 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 96f059a3e20e0c06cd4737a8f5ae27bf 1 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=96f059a3e20e0c06cd4737a8f5ae27bf 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:20:06.477 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.fYA 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.fYA 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.fYA 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=728e6c4b129f5fda630af40eb383a978aa5bcd222cab1e6d 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.pbR 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 728e6c4b129f5fda630af40eb383a978aa5bcd222cab1e6d 2 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 728e6c4b129f5fda630af40eb383a978aa5bcd222cab1e6d 2 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=728e6c4b129f5fda630af40eb383a978aa5bcd222cab1e6d 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:20:06.736 11:43:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:20:06.736 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # 
chmod 0600 /tmp/spdk.key-sha384.pbR 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.pbR 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.pbR 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=ddda74855a1bb84408bb084e3abe667e 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.F4o 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key ddda74855a1bb84408bb084e3abe667e 0 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 ddda74855a1bb84408bb084e3abe667e 0 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=ddda74855a1bb84408bb084e3abe667e 00:20:06.737 11:43:10 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.F4o 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.F4o 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.F4o 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=cc368d884268e2953b0652d1440b32cc93aa8617481933947bbb31fceaa92ee0 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.E6s 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key cc368d884268e2953b0652d1440b32cc93aa8617481933947bbb31fceaa92ee0 3 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 cc368d884268e2953b0652d1440b32cc93aa8617481933947bbb31fceaa92ee0 3 
00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=cc368d884268e2953b0652d1440b32cc93aa8617481933947bbb31fceaa92ee0 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.E6s 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.E6s 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.E6s 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1697441 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1697441 ']' 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.737 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6MK 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.bDh ]] 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bDh 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.0HO 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.mvF ]] 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mvF 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Ctb 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.fYA ]] 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fYA 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.pbR 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.F4o ]] 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.F4o 00:20:06.996 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.E6s 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.2 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 
kernel_target_ip=10.0.0.2 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # local block nvme 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # modprobe nvmet 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:06.997 11:43:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:20:10.292 Waiting for block devices as requested 00:20:10.292 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:20:10.292 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:20:10.292 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:20:10.292 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:20:10.292 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:20:10.551 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:20:10.552 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:20:10.552 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:20:10.864 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:20:10.864 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:20:10.864 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:20:10.864 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:20:11.198 0000:80:04.4 (8086 2021): vfio-pci 
-> ioatdma 00:20:11.198 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:20:11.198 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:20:11.198 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:20:11.470 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:20:12.039 No valid GPT data, bailing 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # echo 1 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@471 -- # echo 1 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@473 -- # echo 10.0.0.2 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # echo rdma 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@475 -- # echo 4420 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # echo ipv4 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:12.039 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 --hostid=0049fda6-1adc-e711-906e-0017a4403562 -a 10.0.0.2 -t rdma -s 4420 00:20:12.299 00:20:12.299 Discovery Log Number of Records 2, Generation counter 2 00:20:12.299 =====Discovery Log Entry 0====== 00:20:12.299 trtype: rdma 00:20:12.299 adrfam: ipv4 00:20:12.299 subtype: current discovery subsystem 00:20:12.299 treq: not specified, sq flow control disable supported 00:20:12.299 portid: 1 00:20:12.299 trsvcid: 4420 00:20:12.299 subnqn: nqn.2014-08.org.nvmexpress.discovery 
00:20:12.299 traddr: 10.0.0.2 00:20:12.299 eflags: none 00:20:12.299 rdma_prtype: not specified 00:20:12.299 rdma_qptype: connected 00:20:12.299 rdma_cms: rdma-cm 00:20:12.299 rdma_pkey: 0x0000 00:20:12.299 =====Discovery Log Entry 1====== 00:20:12.299 trtype: rdma 00:20:12.299 adrfam: ipv4 00:20:12.299 subtype: nvme subsystem 00:20:12.299 treq: not specified, sq flow control disable supported 00:20:12.299 portid: 1 00:20:12.299 trsvcid: 4420 00:20:12.299 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:12.299 traddr: 10.0.0.2 00:20:12.299 eflags: none 00:20:12.299 rdma_prtype: not specified 00:20:12.299 rdma_qptype: connected 00:20:12.299 rdma_cms: rdma-cm 00:20:12.299 rdma_pkey: 0x0000 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: ]] 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.299 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.559 nvme0n1 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: ]] 00:20:12.559 11:43:15 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.559 11:43:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.819 nvme0n1 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:12.819 11:43:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: ]] 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.819 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.078 nvme0n1 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:13.078 11:43:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: ]] 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.078 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.079 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.079 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.338 nvme0n1 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.338 11:43:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:13.338 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:13.339 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:13.339 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: ]] 00:20:13.339 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:13.339 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:13.339 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.339 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:13.339 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:20:13.339 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:13.339 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.339 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:13.339 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.339 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.339 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.339 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:13.339 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.339 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.598 nvme0n1 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.598 11:43:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.598 11:43:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.859 nvme0n1 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.859 11:43:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: ]] 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.859 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.859 11:43:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.119 nvme0n1 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: ]] 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:14.119 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:14.120 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.120 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:14.120 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.120 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:20:14.120 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.120 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.120 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.120 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.379 nvme0n1 00:20:14.379 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.379 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.379 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.379 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.379 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.379 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.379 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.379 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.379 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.379 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.379 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.379 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe3072 2 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: ]] 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.380 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.639 nvme0n1 00:20:14.640 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.640 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.640 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.640 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.640 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.640 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.640 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.640 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.640 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:20:14.640 11:43:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: ]] 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:14.640 11:43:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.640 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.899 nvme0n1 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:14.899 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:14.900 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:14.900 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:14.900 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 
00:20:14.900 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:14.900 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:14.900 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.900 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:14.900 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:14.900 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:14.900 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.900 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:14.900 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.900 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.900 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.900 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:14.900 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.900 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.159 nvme0n1 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:15.159 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:15.160 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:15.160 
11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:15.160 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:15.160 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:15.160 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: ]] 00:20:15.160 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:15.160 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:15.160 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.160 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:15.160 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:15.160 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:15.160 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.160 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:15.160 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.160 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.160 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.160 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.160 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.160 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.419 nvme0n1 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:15.419 
11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: ]] 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.419 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.678 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.678 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.678 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.678 11:43:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.937 nvme0n1 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.937 11:43:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: ]] 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:15.937 11:43:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.937 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.196 nvme0n1 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 
-- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: ]] 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.196 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.455 nvme0n1 00:20:16.455 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.455 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.456 11:43:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.715 nvme0n1 00:20:16.715 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.715 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.715 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.715 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.715 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.715 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.715 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.715 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.715 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.715 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.715 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.715 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.715 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: ]] 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.974 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.234 nvme0n1 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 1 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: ]] 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=1 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.234 11:43:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.801 nvme0n1 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: ]] 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 
00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.801 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.060 nvme0n1 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.060 11:43:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: ]] 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.060 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:20:18.628 nvme0n1 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.628 11:43:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.888 nvme0n1 00:20:18.888 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.888 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.888 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.888 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.888 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.888 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.146 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.146 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.147 11:43:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: ]] 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.147 
11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.147 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.715 nvme0n1 00:20:19.715 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.715 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.715 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:19.715 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.715 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.715 11:43:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.715 11:43:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: ]] 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 
00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.715 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.284 nvme0n1 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.284 11:43:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 
00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: ]] 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.284 11:43:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.853 nvme0n1 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: ]] 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.853 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.421 nvme0n1 00:20:21.421 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.421 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.421 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:21.421 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.421 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.421 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@44 -- # digest=sha256 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.680 11:43:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.248 nvme0n1 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 
-- # for dhgroup in "${dhgroups[@]}" 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: ]] 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:22.248 11:43:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.248 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.508 nvme0n1 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: ]] 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.508 11:43:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.767 nvme0n1 00:20:22.767 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:22.767 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.767 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.767 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.767 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.767 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.767 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.767 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.767 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.767 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.767 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.767 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: ]] 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.768 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.027 nvme0n1 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:23.027 11:43:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: ]] 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.027 11:43:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.027 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.286 nvme0n1 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.286 
11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.286 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.546 nvme0n1 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: ]] 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.546 11:43:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.805 nvme0n1 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.805 11:43:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha384)' 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: ]] 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key 
key1 --dhchap-ctrlr-key ckey1 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.805 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.064 nvme0n1 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 
00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: ]] 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.064 11:43:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.064 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.065 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.065 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.324 nvme0n1 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.324 
11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: ]] 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=3 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.324 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.584 nvme0n1 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.584 11:43:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.844 nvme0n1 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.844 11:43:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:24.844 11:43:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: ]] 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:24.844 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.845 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.845 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.845 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.845 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.845 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.104 nvme0n1 
00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:25.104 
11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: ]] 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.104 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.363 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:25.363 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.363 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.363 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.363 nvme0n1 00:20:25.363 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.363 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.363 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.363 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.363 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: ]] 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.622 11:43:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.882 nvme0n1 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.882 11:43:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: ]] 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.882 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.141 nvme0n1 00:20:26.141 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.141 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.141 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.141 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:26.141 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.141 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.141 
11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.141 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.141 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.141 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 
-- # connect_authenticate sha384 ffdhe4096 4 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.142 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.400 nvme0n1 00:20:26.400 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.400 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.400 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.400 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:26.401 11:43:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.401 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.401 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.401 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.401 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.401 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # 
echo ffdhe6144 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: ]] 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.660 11:43:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.660 11:43:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.920 nvme0n1 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: ]] 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.920 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.490 nvme0n1 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: ]] 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 
00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.490 11:43:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.750 nvme0n1 00:20:27.750 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.750 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.750 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.750 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.750 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.750 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.009 
11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: ]] 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe6144 3 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.009 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.268 nvme0n1 00:20:28.268 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.268 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.268 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.268 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.268 
11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:28.268 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.268 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.268 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.268 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.268 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.268 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.268 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:28.268 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:28.268 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.268 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:28.268 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:28.268 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.269 11:43:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.836 nvme0n1 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: ]] 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.836 11:43:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.836 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.403 nvme0n1 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: ]] 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:20:29.403 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:29.404 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.404 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.404 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.404 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.404 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.404 11:43:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.972 nvme0n1 00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.972 11:43:33 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU:
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3:
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU:
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: ]]
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3:
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.972 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:30.540 nvme0n1
00:20:30.540 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:30.540 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:20:30.540 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:30.540 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:30.540 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:20:30.540 11:43:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==:
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2:
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==:
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: ]]
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2:
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:30.800 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:31.369 nvme0n1
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=:
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=:
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:31.369 11:43:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:31.937 nvme0n1
00:20:31.937 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:31.937 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:20:31.937 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:20:31.937 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:31.937 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:31.937 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:31.937 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:31.937 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:31.937 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:31.937 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:31.937 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:31.937 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:20:31.937 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:20:31.937 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:20:31.937 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:20:31.937 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA:
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=:
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA:
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: ]]
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=:
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:31.938 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:32.197 nvme0n1
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==:
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==:
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:20:32.197 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==:
00:20:32.198 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: ]]
00:20:32.198 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==:
00:20:32.198 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:20:32.198 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:20:32.198 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:20:32.198 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:20:32.198 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:20:32.198 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:20:32.198 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:32.198 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.198 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:32.198 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.198 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:32.198 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.198 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:32.457 nvme0n1
00:20:32.457 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.457 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:20:32.457 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.457 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:32.457 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:20:32.457 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.457 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:32.457 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:32.457 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.457 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:32.457 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.457 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:20:32.457 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU:
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3:
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU:
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: ]]
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3:
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.458 11:43:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:32.717 nvme0n1
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==:
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2:
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==:
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: ]]
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2:
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.717 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:32.977 nvme0n1
00:20:32.977 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.977 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:20:32.977 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:20:32.977 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.977 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:32.977 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.977 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:32.977 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=:
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=:
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.978 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:33.237 nvme0n1
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA:
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=:
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA:
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: ]]
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=:
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:33.237 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.238 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:33.238 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:33.238 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:33.497 nvme0n1
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==:
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==:
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==:
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: ]]
00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host --
host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.497 11:43:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.756 nvme0n1 00:20:33.756 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.756 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.756 11:43:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.756 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.756 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.756 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.756 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.756 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.756 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.756 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.756 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:33.757 11:43:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: ]] 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.757 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.016 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.016 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.016 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.016 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.016 nvme0n1 00:20:34.016 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.016 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.016 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.016 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.016 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.016 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.016 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.016 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.016 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.016 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: ]] 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- 
# set +x 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.276 nvme0n1 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.276 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.535 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe3072 4 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.536 11:43:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.794 nvme0n1 00:20:34.794 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.794 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: ]] 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.795 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.054 nvme0n1 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.054 
11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:35.054 
11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: ]] 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.054 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.314 nvme0n1 00:20:35.314 11:43:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: ]] 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.314 11:43:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.573 nvme0n1 00:20:35.573 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.573 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.573 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.573 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.573 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.573 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 
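The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` assignments at `auth.sh@58` above rely on bash's `${parameter:+word}` expansion: the flag pair is emitted only when a controller key exists for that keyid, and vanishes from the command line otherwise (as for key4, whose `ckey` is empty). A standalone sketch of the idiom — the array names and values here are illustrative, not taken from the test scripts:

```shell
# ${ckeys[keyid]:+word...} expands to the flag pair only when that
# entry is set and non-empty; otherwise the array stays empty.
demo_ckey_expansion() {
  local ckeys=(ckeyA ckeyB "")   # last entry: no controller key
  local keyid
  for keyid in "${!ckeys[@]}"; do
    local args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid argc=${#args[@]}"
  done
}
demo_ckey_expansion
```

Because the expansion happens before the RPC is invoked, the same `rpc_cmd bdev_nvme_attach_controller ... "${ckey[@]}"` call serves both the bidirectional and unidirectional authentication cases.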
00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: ]] 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:35.832 
11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.832 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.091 nvme0n1 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.091 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.351 nvme0n1 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: ]] 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.351 11:43:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.920 nvme0n1 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.920 11:43:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha512)' 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: ]] 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key 
key1 --dhchap-ctrlr-key ckey1 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.920 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.179 nvme0n1 00:20:37.179 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.179 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.179 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.179 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.179 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.179 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 
00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: ]] 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:37.439 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:37.440 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:37.440 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.440 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:37.440 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.440 11:43:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.440 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.440 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.440 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.440 11:43:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.699 nvme0n1 00:20:37.699 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.699 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.699 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.699 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.699 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.699 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.699 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.699 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.699 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.699 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.699 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.700 
11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: ]] 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=3 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.700 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.268 nvme0n1 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha512 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.268 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.527 nvme0n1 00:20:38.527 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.527 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.527 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.527 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.527 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:38.527 11:43:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.786 11:43:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjNkMTY4NTIxYzAzOGJmNjNkMjRkOTIzMGU0YTRhNWSrbpKA: 00:20:38.786 11:43:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: ]] 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2QyMzdjM2EzMDgyMGU5ODBlYWI2NDEyYjc2ZTg5Njc1MDM3NjI4ZWIxNDE3ZmE4YTYzMTJjNjVmNTIxNDMwZaZRxL8=: 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.786 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.787 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.787 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.787 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.355 nvme0n1 
00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:39.355 
11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: ]] 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:39.355 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:39.356 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.356 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:39.356 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:39.356 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:39.356 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.356 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:39.356 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.356 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.356 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:39.356 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.356 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.356 11:43:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.926 nvme0n1 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: ]] 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.926 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.495 nvme0n1 00:20:40.495 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.495 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.495 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.495 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.495 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.495 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.495 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.495 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.495 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.495 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.754 11:43:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI4ZTZjNGIxMjlmNWZkYTYzMGFmNDBlYjM4M2E5NzhhYTViY2QyMjJjYWIxZTZkcRE6og==: 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: ]] 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRkYTc0ODU1YTFiYjg0NDA4YmIwODRlM2FiZTY2N2WeNCN2: 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha512 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.754 11:43:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.323 nvme0n1 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.323 
11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNjhkODg0MjY4ZTI5NTNiMDY1MmQxNDQwYjMyY2M5M2FhODYxNzQ4MTkzMzk0N2JiYjMxZmNlYWE5MmVlMMsY/i8=: 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 
-- # connect_authenticate sha512 ffdhe8192 4 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.323 11:43:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.049 nvme0n1 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.049 11:43:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:42.049 11:43:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: ]] 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.049 request: 00:20:42.049 { 00:20:42.049 "name": "nvme0", 00:20:42.049 "trtype": "rdma", 00:20:42.049 "traddr": "10.0.0.2", 00:20:42.049 "adrfam": "ipv4", 00:20:42.049 "trsvcid": "4420", 00:20:42.049 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:42.049 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:42.049 "prchk_reftag": false, 00:20:42.049 "prchk_guard": false, 00:20:42.049 "hdgst": false, 00:20:42.049 "ddgst": false, 00:20:42.049 "allow_unrecognized_csi": false, 00:20:42.049 "method": "bdev_nvme_attach_controller", 00:20:42.049 "req_id": 1 00:20:42.049 } 00:20:42.049 Got JSON-RPC error response 00:20:42.049 response: 00:20:42.049 { 00:20:42.049 "code": -5, 00:20:42.049 "message": "Input/output error" 00:20:42.049 } 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.049 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.049 request: 00:20:42.049 { 00:20:42.049 "name": "nvme0", 00:20:42.049 "trtype": "rdma", 00:20:42.049 "traddr": "10.0.0.2", 00:20:42.049 "adrfam": "ipv4", 00:20:42.049 "trsvcid": "4420", 00:20:42.049 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:20:42.049 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:42.049 "prchk_reftag": false, 00:20:42.049 "prchk_guard": false, 00:20:42.049 "hdgst": false, 00:20:42.049 "ddgst": false, 00:20:42.049 "dhchap_key": "key2", 00:20:42.049 "allow_unrecognized_csi": false, 00:20:42.049 "method": "bdev_nvme_attach_controller", 00:20:42.049 "req_id": 1 00:20:42.049 } 00:20:42.049 Got JSON-RPC error response 00:20:42.049 response: 00:20:42.049 { 00:20:42.049 "code": -5, 00:20:42.049 "message": "Input/output error" 00:20:42.309 } 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:42.309 11:43:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.309 request: 00:20:42.309 { 00:20:42.309 "name": "nvme0", 00:20:42.309 "trtype": "rdma", 00:20:42.309 "traddr": "10.0.0.2", 00:20:42.309 "adrfam": "ipv4", 00:20:42.309 "trsvcid": "4420", 00:20:42.309 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:42.309 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:42.309 "prchk_reftag": false, 00:20:42.309 "prchk_guard": false, 00:20:42.309 "hdgst": false, 00:20:42.309 "ddgst": false, 00:20:42.309 "dhchap_key": "key1", 00:20:42.309 "dhchap_ctrlr_key": "ckey2", 00:20:42.309 "allow_unrecognized_csi": false, 00:20:42.309 "method": "bdev_nvme_attach_controller", 00:20:42.309 "req_id": 1 00:20:42.309 } 00:20:42.309 
Got JSON-RPC error response 00:20:42.309 response: 00:20:42.309 { 00:20:42.309 "code": -5, 00:20:42.309 "message": "Input/output error" 00:20:42.309 } 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.309 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.569 nvme0n1 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: ]] 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.569 11:43:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.569 request: 00:20:42.569 { 00:20:42.569 "name": "nvme0", 00:20:42.569 "dhchap_key": "key1", 00:20:42.569 "dhchap_ctrlr_key": "ckey2", 00:20:42.569 "method": "bdev_nvme_set_keys", 00:20:42.569 "req_id": 1 00:20:42.569 } 00:20:42.569 Got JSON-RPC error response 00:20:42.569 response: 00:20:42.569 { 00:20:42.569 "code": -13, 00:20:42.569 "message": "Permission denied" 00:20:42.569 } 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 
00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:20:42.569 11:43:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:43.943 11:43:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.943 11:43:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:43.943 11:43:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.943 11:43:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.943 11:43:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.943 11:43:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:20:43.943 11:43:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:44.876 11:43:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.876 11:43:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:44.876 11:43:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.876 11:43:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.876 11:43:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.876 11:43:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:20:44.876 11:43:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:45.811 11:43:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTRhOTAwZGM4ZjY5YmViYTEyMWQ0N2I0ZWJkNDJhMDc1Y2FlODZkOWM5NmFiYTViAslEWQ==: 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: ]] 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNGE3MTRjMzM0NTk1OTVhM2MxNGJlY2U2ODM3NzU1OWI1MjhjZmNmNWIzNzE49bG9Ww==: 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.811 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.069 nvme0n1 00:20:46.069 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.069 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:46.069 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.069 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.069 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:46.069 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:46.069 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:46.069 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:46.069 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.069 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:46.069 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY5MDI0NmNmNjI3NzQ4NTlhODljODAxNWVkMmM1ZDlqOafU: 00:20:46.069 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: ]] 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZmMDU5YTNlMjBlMGMwNmNkNDczN2E4ZjVhZTI3YmajjYg3: 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:20:46.070 request: 00:20:46.070 { 00:20:46.070 "name": "nvme0", 00:20:46.070 "dhchap_key": "key2", 00:20:46.070 "dhchap_ctrlr_key": "ckey1", 00:20:46.070 "method": "bdev_nvme_set_keys", 00:20:46.070 "req_id": 1 00:20:46.070 } 00:20:46.070 Got JSON-RPC error response 00:20:46.070 response: 00:20:46.070 { 00:20:46.070 "code": -13, 00:20:46.070 "message": "Permission denied" 00:20:46.070 } 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:46.070 11:43:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:47.005 11:43:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.005 11:43:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.005 11:43:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.005 
11:43:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:47.005 11:43:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.005 11:43:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:47.005 11:43:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:48.381 11:43:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.381 11:43:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:48.381 11:43:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.381 11:43:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.381 11:43:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.381 11:43:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:48.381 11:43:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # 
nvmftestfini 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@99 -- # sync 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # set +e 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:20:49.317 rmmod nvme_rdma 00:20:49.317 rmmod nvme_fabrics 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # set -e 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # return 0 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # '[' -n 1697441 ']' 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@337 -- # killprocess 1697441 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1697441 ']' 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1697441 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1697441 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:49.317 11:43:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1697441' 00:20:49.317 killing process with pid 1697441 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1697441 00:20:49.317 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1697441 00:20:49.575 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:49.575 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # nvmf_fini 00:20:49.575 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@264 -- # local dev 00:20:49.575 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@267 -- # remove_target_ns 00:20:49.575 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:49.575 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:49.575 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:49.575 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@268 -- # delete_main_bridge 00:20:49.575 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:49.575 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@130 -- # return 0 00:20:49.575 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 
00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # _dev=0 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # dev_map=() 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@284 -- # iptr 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@542 -- # iptables-save 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@542 -- # iptables-restore 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:20:49.576 11:43:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # echo 0 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@497 -- # modprobe -r nvmet_rdma nvmet 00:20:49.576 11:43:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:20:52.108 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:20:52.108 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:20:52.108 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:20:52.108 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:20:52.108 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:20:52.108 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 
00:20:52.108 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:20:52.108 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:20:52.108 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:20:52.108 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:20:52.108 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:20:52.108 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:20:52.108 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:20:52.108 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:20:52.108 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:20:52.108 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:20:55.396 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:20:55.396 11:43:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.6MK /tmp/spdk.key-null.0HO /tmp/spdk.key-sha256.Ctb /tmp/spdk.key-sha384.pbR /tmp/spdk.key-sha512.E6s /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:20:55.396 11:43:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:20:58.687 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:20:58.687 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:20:58.687 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:20:58.687 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:20:58.687 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:20:58.687 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:20:58.687 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:20:58.687 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:20:58.687 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:20:58.687 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:20:58.687 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:20:58.687 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:20:58.687 0000:80:04.4 (8086 2021): Already using 
the vfio-pci driver 00:20:58.687 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:20:58.687 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:20:58.687 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:20:58.687 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:20:58.687 00:20:58.687 real 0m59.533s 00:20:58.687 user 0m46.672s 00:20:58.687 sys 0m14.304s 00:20:58.687 11:44:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:58.687 11:44:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.687 ************************************ 00:20:58.687 END TEST nvmf_auth_host 00:20:58.687 ************************************ 00:20:58.687 11:44:02 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:20:58.687 11:44:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:58.687 11:44:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:58.687 11:44:02 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.687 ************************************ 00:20:58.687 START TEST nvmf_bdevperf 00:20:58.687 ************************************ 00:20:58.687 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:20:58.687 * Looking for test storage... 
00:20:58.687 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:58.687 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:58.687 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:58.687 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:58.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.946 --rc genhtml_branch_coverage=1 00:20:58.946 --rc genhtml_function_coverage=1 00:20:58.946 --rc genhtml_legend=1 00:20:58.946 --rc geninfo_all_blocks=1 00:20:58.946 --rc geninfo_unexecuted_blocks=1 00:20:58.946 00:20:58.946 ' 00:20:58.946 11:44:02 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:58.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.946 --rc genhtml_branch_coverage=1 00:20:58.946 --rc genhtml_function_coverage=1 00:20:58.946 --rc genhtml_legend=1 00:20:58.946 --rc geninfo_all_blocks=1 00:20:58.946 --rc geninfo_unexecuted_blocks=1 00:20:58.946 00:20:58.946 ' 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:58.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.946 --rc genhtml_branch_coverage=1 00:20:58.946 --rc genhtml_function_coverage=1 00:20:58.946 --rc genhtml_legend=1 00:20:58.946 --rc geninfo_all_blocks=1 00:20:58.946 --rc geninfo_unexecuted_blocks=1 00:20:58.946 00:20:58.946 ' 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:58.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.946 --rc genhtml_branch_coverage=1 00:20:58.946 --rc genhtml_function_coverage=1 00:20:58.946 --rc genhtml_legend=1 00:20:58.946 --rc geninfo_all_blocks=1 00:20:58.946 --rc geninfo_unexecuted_blocks=1 00:20:58.946 00:20:58.946 ' 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.946 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
paths/export.sh@5 -- # export PATH 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@50 -- # : 0 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:58.947 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # remove_target_ns 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # xtrace_disable 00:20:58.947 11:44:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- 
# set +x 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@131 -- # pci_devs=() 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@131 -- # local -a pci_devs 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@133 -- # pci_drivers=() 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@135 -- # net_devs=() 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@135 -- # local -ga net_devs 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@136 -- # e810=() 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@136 -- # local -ga e810 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@137 -- # x722=() 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@137 -- # local -ga x722 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@138 -- # mlx=() 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@138 -- # local -ga mlx 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@146 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@183 -- # echo 'Found 
0000:18:00.0 (0x15b3 - 0x1015)' 00:21:05.517 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:05.517 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@226 -- # for pci in 
"${pci_devs[@]}" 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:05.517 Found net devices under 0000:18:00.0: mlx_0_0 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:05.517 Found net devices under 0000:18:00.1: mlx_0_1 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # get_rdma_if_list 00:21:05.517 11:44:08 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@75 -- # rdma_devs=() 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@89 -- # continue 2 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@89 -- # continue 2 
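The `get_rdma_if_list` loop traced above intersects the detected net devices with the RDMA-capable list, using `continue 2` to jump back to the outer loop on the first match. A self-contained sketch with illustrative device lists (`eth0` is added here only to show a non-match being skipped):

```shell
# Sketch of the device-intersection loop from get_rdma_if_list: keep
# only net devices that also appear in the RDMA-capable list.
net_devs=(mlx_0_0 mlx_0_1 eth0)      # illustrative input lists
rxe_net_devs=(mlx_0_0 mlx_0_1)
rdma_devs=()
for net_dev in "${net_devs[@]}"; do
    for rxe_net_dev in "${rxe_net_devs[@]}"; do
        if [[ $net_dev == "$rxe_net_dev" ]]; then
            rdma_devs+=("$net_dev")
            continue 2               # resume the outer loop on first match
        fi
    done
done
echo "${rdma_devs[@]}"               # mlx_0_0 mlx_0_1
```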
00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # is_hw=yes 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@61 -- # uname 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_cm 00:21:05.517 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_core 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_umad 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe iw_cm 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@25 -- # 
local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@28 -- # local -g _dev 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@44 -- # ips=() 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@58 -- # key_initiator=target1 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 
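The `ip_pool=0x0a000001` value above is handed to `val_to_ip`, which the trace shows printing `10 0 0 1` as a dotted quad; each interface pair then consumes two consecutive addresses. The script's internals aren't visible in the trace, so this reconstructs the arithmetic as a hedged sketch:

```shell
# Reconstruction (assumption: the real val_to_ip computes these octets
# before calling printf): split a 32-bit value into dotted-quad form.
# 0x0a000001 == 167772161 == 10.0.0.1.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) \
        $(( (val >> 16) & 255 )) \
        $(( (val >> 8)  & 255 )) \
        $((  val        & 255 ))
}
val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```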
00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@11 -- # local val=167772161 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:21:05.518 10.0.0.1 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:21:05.518 11:44:08 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@11 -- # local val=167772162 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:21:05.518 10.0.0.2 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@38 -- # ping_ips 1 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=target0 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n 
target0 ]] 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:05.518 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:21:05.778 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:05.778 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:05.778 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:05.778 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:05.778 11:44:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:05.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:05.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 00:21:05.778 00:21:05.778 --- 10.0.0.2 ping statistics --- 00:21:05.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.778 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=target0 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:05.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:05.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.028 ms 00:21:05.778 00:21:05.778 --- 10.0.0.2 ping statistics --- 00:21:05.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.778 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:21:05.778 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # return 0 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=target0 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=target1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local 
dev=target0 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=target1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:21:05.779 11:44:09 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 
00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # nvmfpid=1709717 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # waitforlisten 1709717 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1709717 ']' 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.779 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:05.779 [2024-11-20 11:44:09.160502] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:21:05.779 [2024-11-20 11:44:09.160559] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.780 [2024-11-20 11:44:09.234599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:06.039 [2024-11-20 11:44:09.283601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:06.039 [2024-11-20 11:44:09.283636] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.039 [2024-11-20 11:44:09.283645] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.039 [2024-11-20 11:44:09.283670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.039 [2024-11-20 11:44:09.283678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.039 [2024-11-20 11:44:09.284813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.039 [2024-11-20 11:44:09.284890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.039 [2024-11-20 11:44:09.284892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.039 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.039 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:21:06.039 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:06.039 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:06.039 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:06.039 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.039 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:06.039 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.039 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:06.039 [2024-11-20 11:44:09.449612] 
rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b239f0/0x1b27ee0) succeed. 00:21:06.039 [2024-11-20 11:44:09.458539] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b24fe0/0x1b69580) succeed. 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:06.298 Malloc0 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:06.298 [2024-11-20 11:44:09.600004] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # config=() 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # local subsystem config 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:06.298 { 00:21:06.298 "params": { 00:21:06.298 "name": "Nvme$subsystem", 00:21:06.298 "trtype": "$TEST_TRANSPORT", 00:21:06.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.298 "adrfam": "ipv4", 00:21:06.298 "trsvcid": "$NVMF_PORT", 00:21:06.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.298 "hdgst": ${hdgst:-false}, 00:21:06.298 "ddgst": ${ddgst:-false} 00:21:06.298 }, 00:21:06.298 "method": "bdev_nvme_attach_controller" 00:21:06.298 } 00:21:06.298 EOF 00:21:06.298 )") 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # cat 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # jq . 
00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@397 -- # IFS=, 00:21:06.298 11:44:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:21:06.298 "params": { 00:21:06.298 "name": "Nvme1", 00:21:06.298 "trtype": "rdma", 00:21:06.298 "traddr": "10.0.0.2", 00:21:06.298 "adrfam": "ipv4", 00:21:06.298 "trsvcid": "4420", 00:21:06.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:06.298 "hdgst": false, 00:21:06.298 "ddgst": false 00:21:06.298 }, 00:21:06.298 "method": "bdev_nvme_attach_controller" 00:21:06.298 }' 00:21:06.298 [2024-11-20 11:44:09.649705] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:21:06.298 [2024-11-20 11:44:09.649760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709755 ] 00:21:06.298 [2024-11-20 11:44:09.729557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.558 [2024-11-20 11:44:09.775636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.558 Running I/O for 1 seconds... 
00:21:07.751 17732.00 IOPS, 69.27 MiB/s 00:21:07.751 Latency(us) 00:21:07.751 [2024-11-20T10:44:11.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.751 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:07.751 Verification LBA range: start 0x0 length 0x4000 00:21:07.751 Nvme1n1 : 1.01 17762.54 69.38 0.00 0.00 7167.35 2564.45 11169.61 00:21:07.751 [2024-11-20T10:44:11.231Z] =================================================================================================================== 00:21:07.751 [2024-11-20T10:44:11.231Z] Total : 17762.54 69.38 0.00 0.00 7167.35 2564.45 11169.61 00:21:07.751 11:44:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1710000 00:21:07.751 11:44:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:21:07.751 11:44:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:21:07.751 11:44:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:21:07.751 11:44:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # config=() 00:21:07.751 11:44:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # local subsystem config 00:21:07.751 11:44:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:07.751 11:44:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:07.751 { 00:21:07.751 "params": { 00:21:07.751 "name": "Nvme$subsystem", 00:21:07.751 "trtype": "$TEST_TRANSPORT", 00:21:07.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.751 "adrfam": "ipv4", 00:21:07.751 "trsvcid": "$NVMF_PORT", 00:21:07.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.751 "hdgst": ${hdgst:-false}, 00:21:07.751 
"ddgst": ${ddgst:-false} 00:21:07.751 }, 00:21:07.751 "method": "bdev_nvme_attach_controller" 00:21:07.751 } 00:21:07.751 EOF 00:21:07.751 )") 00:21:07.751 11:44:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # cat 00:21:07.751 11:44:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # jq . 00:21:07.751 11:44:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@397 -- # IFS=, 00:21:07.751 11:44:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:21:07.751 "params": { 00:21:07.751 "name": "Nvme1", 00:21:07.751 "trtype": "rdma", 00:21:07.751 "traddr": "10.0.0.2", 00:21:07.751 "adrfam": "ipv4", 00:21:07.751 "trsvcid": "4420", 00:21:07.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.751 "hdgst": false, 00:21:07.751 "ddgst": false 00:21:07.751 }, 00:21:07.751 "method": "bdev_nvme_attach_controller" 00:21:07.751 }' 00:21:07.751 [2024-11-20 11:44:11.204169] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:21:07.751 [2024-11-20 11:44:11.204231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1710000 ] 00:21:08.010 [2024-11-20 11:44:11.283900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.010 [2024-11-20 11:44:11.328418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.268 Running I/O for 15 seconds... 
00:21:10.139 18018.00 IOPS, 70.38 MiB/s [2024-11-20T10:44:14.186Z] 18081.00 IOPS, 70.63 MiB/s [2024-11-20T10:44:14.186Z] 11:44:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1709717 00:21:10.706 11:44:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:21:11.844 16000.00 IOPS, 62.50 MiB/s [2024-11-20T10:44:15.324Z] [2024-11-20 11:44:15.196436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.844 [2024-11-20 11:44:15.196477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32765 cdw0:cff200 sqhd:edc0 p:0 m:0 dnr:0 00:21:11.844 [2024-11-20 11:44:15.196490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.844 [2024-11-20 11:44:15.196500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32765 cdw0:cff200 sqhd:edc0 p:0 m:0 dnr:0 00:21:11.844 [2024-11-20 11:44:15.196509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.844 [2024-11-20 11:44:15.196518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32765 cdw0:cff200 sqhd:edc0 p:0 m:0 dnr:0 00:21:11.844 [2024-11-20 11:44:15.196528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.844 [2024-11-20 11:44:15.196536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32765 cdw0:cff200 sqhd:edc0 p:0 m:0 dnr:0 00:21:11.844 [2024-11-20 11:44:15.198271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:21:11.844 
[2024-11-20 11:44:15.198290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:21:11.844 [2024-11-20 11:44:15.198314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.844 [2024-11-20 11:44:15.198326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.844 [2024-11-20 11:44:15.198374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:122632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.844 [2024-11-20 11:44:15.198385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.844 [2024-11-20 11:44:15.198418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.844 [2024-11-20 11:44:15.198428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.844 [2024-11-20 11:44:15.198460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:122648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.844 [2024-11-20 11:44:15.198470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.844 [2024-11-20 11:44:15.198501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.844 [2024-11-20 11:44:15.198512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.844 [2024-11-20 11:44:15.198542] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.844 [2024-11-20 11:44:15.198557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.844 [2024-11-20 11:44:15.198590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:122672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.844 [2024-11-20 11:44:15.198600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.844 [2024-11-20 11:44:15.198631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.844 [2024-11-20 11:44:15.198641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.844 [2024-11-20 11:44:15.198672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.844 [2024-11-20 11:44:15.198682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.844 [2024-11-20 11:44:15.198714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:122696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.844 [2024-11-20 11:44:15.198724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.844 [2024-11-20 11:44:15.198754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:122704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.844 [2024-11-20 11:44:15.198765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.844 [2024-11-20 11:44:15.198795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:122712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.844 [2024-11-20 11:44:15.198806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.844 [2024-11-20 11:44:15.198837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:122720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.844 [2024-11-20 11:44:15.198847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.845 [2024-11-20 11:44:15.198878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.845 [2024-11-20 11:44:15.198888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.845 [2024-11-20 11:44:15.198919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:122736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.845 [2024-11-20 11:44:15.198929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.845 [2024-11-20 11:44:15.198961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:122744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.845 [2024-11-20 11:44:15.198971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.845 [2024-11-20 11:44:15.199002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:122752 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.845 [2024-11-20 11:44:15.199012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.845 [2024-11-20 11:44:15.199050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:122760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.845 [2024-11-20 11:44:15.199060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.845 [2024-11-20 11:44:15.199094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:122768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.845 [2024-11-20 11:44:15.199105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.845 [2024-11-20 11:44:15.199146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:122776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.845 [2024-11-20 11:44:15.199156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.845 [2024-11-20 11:44:15.199187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:122784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.845 [2024-11-20 11:44:15.199196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.845 [2024-11-20 11:44:15.199226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:122792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.845 [2024-11-20 11:44:15.199236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 
p:0 m:0 dnr:0 00:21:11.845 [2024-11-20 11:44:15.199265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:122800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.845 [2024-11-20 11:44:15.199275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.845 [2024-11-20 11:44:15.199305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:122808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.845 [2024-11-20 11:44:15.199314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.845 [2024-11-20 11:44:15.199345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:122816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.845 [2024-11-20 11:44:15.199355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.845 [2024-11-20 11:44:15.199385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:122824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.845 [2024-11-20 11:44:15.199395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.845 [2024-11-20 11:44:15.199425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:122832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.845 [2024-11-20 11:44:15.199434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.845 [2024-11-20 11:44:15.199464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.845 
[2024-11-20 11:44:15.199474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ec264000 sqhd:7250 p:0 m:0 dnr:0 00:21:11.845
[2024-11-20 11:44:15.199504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:11.845
[... 3 further WRITE commands (lba 122856-122872, len:8 each), each followed by the same ABORTED - SQ DELETION (00/08) completion ...]
[2024-11-20 11:44:15.199664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fe000 len:0x1000 key:0x1bf100 00:21:11.845
[... 94 further READ commands (lba 121864-122608 in steps of 8, len:8 each, keyed SGL buffers 0x2000043fc000 down to 0x200004342000, key:0x1bf100), each followed by the same ABORTED - SQ DELETION (00/08) completion ...]
[2024-11-20 11:44:15.217975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:11.848
[2024-11-20 11:44:15.217995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:11.848
[2024-11-20 11:44:15.218005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122616 len:8 PRP1 0x0 PRP2 0x0 00:21:11.848
[2024-11-20 11:44:15.218036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.848
[2024-11-20 11:44:15.218114] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Unable to perform failover, already in progress. 00:21:11.848
[2024-11-20 11:44:15.218157] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Unable to perform failover, already in progress. 00:21:11.848
00:21:11.848 [2024-11-20 11:44:15.220938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:11.848 [2024-11-20 11:44:15.224359] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:11.848 [2024-11-20 11:44:15.224382] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:11.848 [2024-11-20 11:44:15.224390] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040 00:21:13.041 12000.00 IOPS, 46.88 MiB/s [2024-11-20T10:44:16.521Z] [2024-11-20 11:44:16.228281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:21:13.041 [2024-11-20 11:44:16.228304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:21:13.041 [2024-11-20 11:44:16.228496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:21:13.041 [2024-11-20 11:44:16.228512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:21:13.041 [2024-11-20 11:44:16.228523] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:21:13.041 [2024-11-20 11:44:16.228535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:21:13.041 [2024-11-20 11:44:16.234792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:13.041 [2024-11-20 11:44:16.239202] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:13.041 [2024-11-20 11:44:16.239260] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:13.041 [2024-11-20 11:44:16.239287] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040 00:21:13.864 9600.00 IOPS, 37.50 MiB/s [2024-11-20T10:44:17.344Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1709717 Killed "${NVMF_APP[@]}" "$@" 00:21:13.864 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:21:13.864 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:21:13.864 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:13.864 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.864 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:13.864 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # nvmfpid=1710847 00:21:13.864 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # waitforlisten 1710847 00:21:13.864 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:13.864 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1710847 ']' 00:21:13.864 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.864 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.864 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.864 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.864 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:13.865 [2024-11-20 11:44:17.230185] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:21:13.865 [2024-11-20 11:44:17.230235] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.865 [2024-11-20 11:44:17.243390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:21:13.865 [2024-11-20 11:44:17.243422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:21:13.865 [2024-11-20 11:44:17.243603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:21:13.865 [2024-11-20 11:44:17.243615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:21:13.865 [2024-11-20 11:44:17.243627] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:21:13.865 [2024-11-20 11:44:17.243640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:21:13.865 [2024-11-20 11:44:17.248023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:13.865 [2024-11-20 11:44:17.250622] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:13.865 [2024-11-20 11:44:17.250644] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:13.865 [2024-11-20 11:44:17.250653] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040 00:21:13.865 [2024-11-20 11:44:17.312815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:14.123 [2024-11-20 11:44:17.362145] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.123 [2024-11-20 11:44:17.362184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.123 [2024-11-20 11:44:17.362195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.123 [2024-11-20 11:44:17.362219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.123 [2024-11-20 11:44:17.362227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:14.123 [2024-11-20 11:44:17.363494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.123 [2024-11-20 11:44:17.363571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:14.123 [2024-11-20 11:44:17.363572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.123 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.123 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:21:14.123 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:14.123 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:14.123 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:14.123 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.123 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:14.123 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.123 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:14.123 8000.00 IOPS, 31.25 MiB/s [2024-11-20T10:44:17.603Z] [2024-11-20 11:44:17.544495] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7379f0/0x73bee0) succeed. 00:21:14.123 [2024-11-20 11:44:17.553651] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x738fe0/0x77d580) succeed. 
00:21:14.382 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.382 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:14.382 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.382 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:14.382 Malloc0 00:21:14.382 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.382 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:14.382 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.382 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:14.382 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.382 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:14.382 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.382 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:14.382 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.382 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:21:14.382 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.382 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:14.382 [2024-11-20 11:44:17.696441] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 10.0.0.2 port 4420 *** 00:21:14.382 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.382 11:44:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1710000 00:21:14.947 [2024-11-20 11:44:18.254581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:21:14.947 [2024-11-20 11:44:18.254613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:21:14.947 [2024-11-20 11:44:18.254794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:21:14.947 [2024-11-20 11:44:18.254805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:21:14.947 [2024-11-20 11:44:18.254816] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:21:14.947 [2024-11-20 11:44:18.254829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:21:14.947 [2024-11-20 11:44:18.263920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:14.947 [2024-11-20 11:44:18.300427] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:21:16.141 7424.57 IOPS, 29.00 MiB/s [2024-11-20T10:44:20.556Z] 8742.12 IOPS, 34.15 MiB/s [2024-11-20T10:44:21.929Z] 9760.00 IOPS, 38.12 MiB/s [2024-11-20T10:44:22.889Z] 10586.10 IOPS, 41.35 MiB/s [2024-11-20T10:44:23.859Z] 11256.82 IOPS, 43.97 MiB/s [2024-11-20T10:44:24.793Z] 11820.67 IOPS, 46.17 MiB/s [2024-11-20T10:44:25.727Z] 12294.85 IOPS, 48.03 MiB/s [2024-11-20T10:44:26.661Z] 12702.93 IOPS, 49.62 MiB/s [2024-11-20T10:44:26.661Z] 13057.20 IOPS, 51.00 MiB/s 00:21:23.181 Latency(us) 00:21:23.181 [2024-11-20T10:44:26.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.181 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:23.181 Verification LBA range: start 0x0 length 0x4000 00:21:23.181 Nvme1n1 : 15.01 13056.39 51.00 10284.53 0.00 5462.95 443.44 1057694.05 00:21:23.181 [2024-11-20T10:44:26.661Z] =================================================================================================================== 00:21:23.181 [2024-11-20T10:44:26.661Z] Total : 13056.39 51.00 10284.53 0.00 5462.95 443.44 1057694.05 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:23.440 11:44:26 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@99 -- # sync 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # set +e 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:21:23.440 rmmod nvme_rdma 00:21:23.440 rmmod nvme_fabrics 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # set -e 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # return 0 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # '[' -n 1710847 ']' 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@337 -- # killprocess 1710847 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1710847 ']' 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1710847 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1710847 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1710847' 00:21:23.440 killing process with pid 1710847 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1710847 00:21:23.440 11:44:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1710847 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # nvmf_fini 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@264 -- # local dev 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@267 -- # remove_target_ns 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@268 -- # delete_main_bridge 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@130 -- # return 0 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@41 -- # _dev=0 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@41 -- # dev_map=() 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@284 -- # iptr 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@542 -- # iptables-save 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@542 -- # iptables-restore 00:21:23.699 00:21:23.699 real 0m25.116s 00:21:23.699 user 1m2.470s 00:21:23.699 sys 0m6.427s 00:21:23.699 11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.699 
11:44:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:23.699 ************************************ 00:21:23.699 END TEST nvmf_bdevperf 00:21:23.699 ************************************ 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.958 ************************************ 00:21:23.958 START TEST nvmf_target_disconnect 00:21:23.958 ************************************ 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:21:23.958 * Looking for test storage... 
00:21:23.958 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:21:23.958 
11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:21:23.958 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:24.217 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:21:24.217 --rc genhtml_branch_coverage=1 00:21:24.217 --rc genhtml_function_coverage=1 00:21:24.217 --rc genhtml_legend=1 00:21:24.217 --rc geninfo_all_blocks=1 00:21:24.217 --rc geninfo_unexecuted_blocks=1 00:21:24.217 00:21:24.217 ' 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:24.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.217 --rc genhtml_branch_coverage=1 00:21:24.217 --rc genhtml_function_coverage=1 00:21:24.217 --rc genhtml_legend=1 00:21:24.217 --rc geninfo_all_blocks=1 00:21:24.217 --rc geninfo_unexecuted_blocks=1 00:21:24.217 00:21:24.217 ' 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:24.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.217 --rc genhtml_branch_coverage=1 00:21:24.217 --rc genhtml_function_coverage=1 00:21:24.217 --rc genhtml_legend=1 00:21:24.217 --rc geninfo_all_blocks=1 00:21:24.217 --rc geninfo_unexecuted_blocks=1 00:21:24.217 00:21:24.217 ' 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:24.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.217 --rc genhtml_branch_coverage=1 00:21:24.217 --rc genhtml_function_coverage=1 00:21:24.217 --rc genhtml_legend=1 00:21:24.217 --rc geninfo_all_blocks=1 00:21:24.217 --rc geninfo_unexecuted_blocks=1 00:21:24.217 00:21:24.217 ' 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.217 11:44:27 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.217 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@50 -- # : 0 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:24.218 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@296 -- # prepare_net_devs 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # xtrace_disable 00:21:24.218 11:44:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:21:29.487 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:29.487 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@131 -- # pci_devs=() 00:21:29.487 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@131 -- # local -a pci_devs 00:21:29.487 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@132 -- # pci_net_devs=() 00:21:29.487 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:21:29.487 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@133 -- # pci_drivers=() 00:21:29.487 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@133 -- # local -A pci_drivers 00:21:29.487 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@135 -- # 
net_devs=() 00:21:29.487 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@135 -- # local -ga net_devs 00:21:29.487 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@136 -- # e810=() 00:21:29.487 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@136 -- # local -ga e810 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@137 -- # x722=() 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@137 -- # local -ga x722 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@138 -- # mlx=() 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@138 -- # local -ga mlx 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.488 11:44:32 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:21:29.488 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:29.488 11:44:32 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:29.488 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:29.488 Found net devices under 0000:18:00.0: mlx_0_0 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:29.488 Found net devices under 0000:18:00.1: mlx_0_1 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # get_rdma_if_list 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@75 -- # rdma_devs=() 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # mapfile 
-t rxe_net_devs 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@89 -- # continue 2 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@89 -- # continue 2 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:21:29.488 11:44:32 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # is_hw=yes 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@61 -- # uname 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_cm 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_core 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_umad 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe iw_cm 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:21:29.488 11:44:32 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@28 -- # local -g _dev 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:21:29.488 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@58 -- # key_initiator=target1 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@64 -- # 
initiator=mlx_0_0 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@11 -- # local val=167772161 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:21:29.489 10.0.0.1 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@73 -- # 
set_ip mlx_0_1 167772162 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:21:29.489 10.0.0.2 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/setup.sh@76 -- # set_up mlx_0_1 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@38 -- # ping_ips 1 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:21:29.489 11:44:32 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:21:29.489 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:29.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:29.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.025 ms 00:21:29.749 00:21:29.749 --- 10.0.0.2 ping statistics --- 00:21:29.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.749 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:21:29.749 11:44:32 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:29.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:29.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.025 ms 00:21:29.749 00:21:29.749 --- 10.0.0.2 ping statistics --- 00:21:29.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.749 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # return 0 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:21:29.749 11:44:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:21:29.749 11:44:33 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- 
# local dev=target1 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:21:29.749 11:44:33 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=target1 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n target1 
]] 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:21:29.749 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:21:29.750 ************************************ 00:21:29.750 START TEST nvmf_target_disconnect_tc1 00:21:29.750 ************************************ 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:21:29.750 11:44:33 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:21:29.750 11:44:33 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:30.008 [2024-11-20 11:44:33.271347] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:30.008 [2024-11-20 11:44:33.271392] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:30.008 [2024-11-20 11:44:33.271403] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:21:30.944 [2024-11-20 11:44:34.275504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:21:30.944 [2024-11-20 11:44:34.275585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 00:21:30.944 [2024-11-20 11:44:34.275620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:21:30.945 [2024-11-20 11:44:34.275690] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:30.945 [2024-11-20 11:44:34.275721] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:21:30.945 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:21:30.945 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:21:30.945 Initializing NVMe Controllers 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:30.945 00:21:30.945 real 0m1.147s 00:21:30.945 user 0m0.884s 00:21:30.945 sys 0m0.252s 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:30.945 ************************************ 00:21:30.945 END TEST nvmf_target_disconnect_tc1 00:21:30.945 ************************************ 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:21:30.945 ************************************ 00:21:30.945 START TEST nvmf_target_disconnect_tc2 00:21:30.945 ************************************ 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@328 -- # nvmfpid=1715081 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@329 -- # waitforlisten 1715081 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1715081 ']' 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.945 11:44:34 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:30.945 11:44:34 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:21:31.203 [2024-11-20 11:44:34.424472] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:21:31.203 [2024-11-20 11:44:34.424525] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.203 [2024-11-20 11:44:34.513846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:31.203 [2024-11-20 11:44:34.565270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.203 [2024-11-20 11:44:34.565309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.203 [2024-11-20 11:44:34.565319] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.203 [2024-11-20 11:44:34.565328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:31.203 [2024-11-20 11:44:34.565335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:31.203 [2024-11-20 11:44:34.566828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:21:31.203 [2024-11-20 11:44:34.567037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:31.203 [2024-11-20 11:44:34.566929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:21:31.203 [2024-11-20 11:44:34.567055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:21:32.139 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.139 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:32.139 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:32.139 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:32.139 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:32.139 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.139 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:32.139 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.139 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:32.139 Malloc0 00:21:32.140 11:44:35 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:32.140 [2024-11-20 11:44:35.371152] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a9c570/0x1aa8040) succeed. 00:21:32.140 [2024-11-20 11:44:35.380849] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a9dc00/0x1ae96e0) succeed. 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.140 
11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:32.140 [2024-11-20 11:44:35.523740] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 10.0.0.2 -s 4420 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1715283 00:21:32.140 11:44:35 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:21:32.140 11:44:35 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:34.673 11:44:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1715081 00:21:34.673 11:44:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Write completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Write completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Write completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Write completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Write completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Write completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Write completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Write completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Write completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 
00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Write completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 Read completed with error (sct=0, sc=8) 00:21:35.608 starting I/O failed 00:21:35.608 [2024-11-20 11:44:38.727696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:35.608 [2024-11-20 11:44:38.729285] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:35.608 [2024-11-20 11:44:38.729305] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:35.608 [2024-11-20 11:44:38.729315] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 
00:21:36.177 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1715081 Killed "${NVMF_APP[@]}" "$@"
00:21:36.177 11:44:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:21:36.177 11:44:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:21:36.177 11:44:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:21:36.177 11:44:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:36.177 11:44:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:36.177 11:44:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@328 -- # nvmfpid=1715828
00:21:36.177 11:44:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@329 -- # waitforlisten 1715828
00:21:36.177 11:44:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:21:36.177 11:44:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1715828 ']'
00:21:36.177 11:44:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:36.177 11:44:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:36.177 11:44:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:36.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:36.177 11:44:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:36.177 11:44:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:36.177 [2024-11-20 11:44:39.607463] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:21:36.177 [2024-11-20 11:44:39.607530] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:36.437 [2024-11-20 11:44:39.699788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:36.437 [2024-11-20 11:44:39.733149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:36.437 qpair failed and we were unable to recover it.
00:21:36.437 [2024-11-20 11:44:39.734617] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:36.437 [2024-11-20 11:44:39.734636] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:36.437 [2024-11-20 11:44:39.734646] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:21:36.437 [2024-11-20 11:44:39.744794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:36.437 [2024-11-20 11:44:39.744830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:36.437 [2024-11-20 11:44:39.744839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:36.437 [2024-11-20 11:44:39.744849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:36.437 [2024-11-20 11:44:39.744856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:36.437 [2024-11-20 11:44:39.746283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:21:36.437 [2024-11-20 11:44:39.746383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:21:36.437 [2024-11-20 11:44:39.746483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:21:36.437 [2024-11-20 11:44:39.746484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:21:37.003 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:37.003 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:21:37.003 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:21:37.003 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:37.003 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:37.263 Malloc0
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:37.263 [2024-11-20 11:44:40.556454] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f75570/0x1f81040) succeed.
00:21:37.263 [2024-11-20 11:44:40.566068] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f76c00/0x1fc26e0) succeed.
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:37.263 [2024-11-20 11:44:40.708399] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 ***
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 10.0.0.2 -s 4420
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:37.263 11:44:40 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1715283
00:21:37.263 [2024-11-20 11:44:40.738643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:37.263 qpair failed and we were unable to recover it.
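For reference, the target-side configuration driven by the rpc_cmd calls traced above can be reproduced by hand with SPDK's scripts/rpc.py against the same /var/tmp/spdk.sock socket. This is a sketch assembled from the commands visible in the log (assuming nvmf_tgt is already running), not the test script itself:

```shell
# Assumes an SPDK checkout with nvmf_tgt listening on /var/tmp/spdk.sock,
# as in the run above; addresses and names are taken from the log.

# 64 MiB malloc bdev with 512-byte blocks, named Malloc0
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

# RDMA transport with the same shared-buffer count the test uses
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024

# Subsystem cnode1: allow any host (-a), serial number as in the log
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

# Attach the namespace, then expose the subsystem and discovery service over RDMA
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 10.0.0.2 -s 4420
```

The "*** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 ***" notice in the log is the expected confirmation after the first add_listener call.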
00:21:38.644 Write completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Read completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Write completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Read completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Read completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Read completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Write completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Read completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Write completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Read completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Write completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Read completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Write completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Read completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Write completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Read completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Read completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Read completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Read completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Read completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Read completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Read completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Write completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 
Write completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Write completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Read completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Write completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Write completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Write completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Write completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Write completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 Read completed with error (sct=0, sc=8) 00:21:38.644 starting I/O failed 00:21:38.644 [2024-11-20 11:44:41.743648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.644 [2024-11-20 11:44:41.749273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.644 [2024-11-20 11:44:41.749336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.644 [2024-11-20 11:44:41.749357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.644 [2024-11-20 11:44:41.749368] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.644 [2024-11-20 11:44:41.749378] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.644 [2024-11-20 11:44:41.759371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.644 qpair failed and we were 
unable to recover it. 00:21:38.644 [2024-11-20 11:44:41.769295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.644 [2024-11-20 11:44:41.769348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.644 [2024-11-20 11:44:41.769367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.644 [2024-11-20 11:44:41.769378] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.644 [2024-11-20 11:44:41.769387] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.644 [2024-11-20 11:44:41.779575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.644 qpair failed and we were unable to recover it. 
00:21:38.644 [2024-11-20 11:44:41.789381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.644 [2024-11-20 11:44:41.789430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.644 [2024-11-20 11:44:41.789448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.644 [2024-11-20 11:44:41.789457] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.644 [2024-11-20 11:44:41.789466] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.644 [2024-11-20 11:44:41.799419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.644 qpair failed and we were unable to recover it. 
00:21:38.644 [2024-11-20 11:44:41.809284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.644 [2024-11-20 11:44:41.809329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.644 [2024-11-20 11:44:41.809347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.644 [2024-11-20 11:44:41.809356] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.644 [2024-11-20 11:44:41.809365] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.644 [2024-11-20 11:44:41.819349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.644 qpair failed and we were unable to recover it. 
00:21:38.644 [2024-11-20 11:44:41.829467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.644 [2024-11-20 11:44:41.829509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.644 [2024-11-20 11:44:41.829527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.644 [2024-11-20 11:44:41.829536] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.644 [2024-11-20 11:44:41.829545] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.644 [2024-11-20 11:44:41.839694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.644 qpair failed and we were unable to recover it. 
00:21:38.644 [2024-11-20 11:44:41.849358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.644 [2024-11-20 11:44:41.849404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.644 [2024-11-20 11:44:41.849422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.644 [2024-11-20 11:44:41.849431] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.644 [2024-11-20 11:44:41.849440] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.644 [2024-11-20 11:44:41.859627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.644 qpair failed and we were unable to recover it. 
00:21:38.644 [2024-11-20 11:44:41.869584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.644 [2024-11-20 11:44:41.869630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.644 [2024-11-20 11:44:41.869651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.645 [2024-11-20 11:44:41.869662] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.645 [2024-11-20 11:44:41.869670] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.645 [2024-11-20 11:44:41.879814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.645 qpair failed and we were unable to recover it. 
00:21:38.645 [2024-11-20 11:44:41.889587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.645 [2024-11-20 11:44:41.889633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.645 [2024-11-20 11:44:41.889652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.645 [2024-11-20 11:44:41.889662] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.645 [2024-11-20 11:44:41.889671] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.645 [2024-11-20 11:44:41.899751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.645 qpair failed and we were unable to recover it. 
00:21:38.645 [2024-11-20 11:44:41.909643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.645 [2024-11-20 11:44:41.909686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.645 [2024-11-20 11:44:41.909704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.645 [2024-11-20 11:44:41.909714] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.645 [2024-11-20 11:44:41.909724] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.645 [2024-11-20 11:44:41.919973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.645 qpair failed and we were unable to recover it. 
00:21:38.645 [2024-11-20 11:44:41.929641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.645 [2024-11-20 11:44:41.929682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.645 [2024-11-20 11:44:41.929700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.645 [2024-11-20 11:44:41.929709] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.645 [2024-11-20 11:44:41.929718] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.645 [2024-11-20 11:44:41.939884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.645 qpair failed and we were unable to recover it. 
00:21:38.645 [2024-11-20 11:44:41.949782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.645 [2024-11-20 11:44:41.949822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.645 [2024-11-20 11:44:41.949840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.645 [2024-11-20 11:44:41.949853] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.645 [2024-11-20 11:44:41.949862] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.645 [2024-11-20 11:44:41.959993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.645 qpair failed and we were unable to recover it. 
00:21:38.645 [2024-11-20 11:44:41.969762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.645 [2024-11-20 11:44:41.969809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.645 [2024-11-20 11:44:41.969827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.645 [2024-11-20 11:44:41.969837] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.645 [2024-11-20 11:44:41.969845] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.645 [2024-11-20 11:44:41.979867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.645 qpair failed and we were unable to recover it. 
00:21:38.645 [2024-11-20 11:44:41.989900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.645 [2024-11-20 11:44:41.989951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.645 [2024-11-20 11:44:41.989968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.645 [2024-11-20 11:44:41.989978] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.645 [2024-11-20 11:44:41.989987] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.645 [2024-11-20 11:44:42.000109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.645 qpair failed and we were unable to recover it. 
00:21:38.645 [2024-11-20 11:44:42.009870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.645 [2024-11-20 11:44:42.009914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.645 [2024-11-20 11:44:42.009932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.645 [2024-11-20 11:44:42.009941] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.645 [2024-11-20 11:44:42.009951] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.645 [2024-11-20 11:44:42.020320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.645 qpair failed and we were unable to recover it. 
00:21:38.645 [2024-11-20 11:44:42.029985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.645 [2024-11-20 11:44:42.030023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.645 [2024-11-20 11:44:42.030051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.645 [2024-11-20 11:44:42.030061] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.645 [2024-11-20 11:44:42.030070] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.645 [2024-11-20 11:44:42.040255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.645 qpair failed and we were unable to recover it. 
00:21:38.645 [2024-11-20 11:44:42.050096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.645 [2024-11-20 11:44:42.050144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.645 [2024-11-20 11:44:42.050162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.645 [2024-11-20 11:44:42.050172] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.645 [2024-11-20 11:44:42.050181] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.645 [2024-11-20 11:44:42.060503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.645 qpair failed and we were unable to recover it. 
00:21:38.645 [2024-11-20 11:44:42.070186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.645 [2024-11-20 11:44:42.070235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.645 [2024-11-20 11:44:42.070253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.645 [2024-11-20 11:44:42.070263] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.645 [2024-11-20 11:44:42.070272] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.645 [2024-11-20 11:44:42.080524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.645 qpair failed and we were unable to recover it. 
00:21:38.645 [2024-11-20 11:44:42.090203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.645 [2024-11-20 11:44:42.090244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.645 [2024-11-20 11:44:42.090261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.645 [2024-11-20 11:44:42.090271] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.645 [2024-11-20 11:44:42.090280] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.645 [2024-11-20 11:44:42.100491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.645 qpair failed and we were unable to recover it. 
00:21:38.645 [2024-11-20 11:44:42.110306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.645 [2024-11-20 11:44:42.110352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.645 [2024-11-20 11:44:42.110369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.645 [2024-11-20 11:44:42.110379] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.645 [2024-11-20 11:44:42.110388] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.905 [2024-11-20 11:44:42.120552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.905 qpair failed and we were unable to recover it. 
00:21:38.905 [2024-11-20 11:44:42.130258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.905 [2024-11-20 11:44:42.130304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.905 [2024-11-20 11:44:42.130322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.905 [2024-11-20 11:44:42.130331] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.905 [2024-11-20 11:44:42.130340] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.905 [2024-11-20 11:44:42.140586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.905 qpair failed and we were unable to recover it. 
00:21:38.905 [2024-11-20 11:44:42.150393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.905 [2024-11-20 11:44:42.150436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.905 [2024-11-20 11:44:42.150454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.905 [2024-11-20 11:44:42.150463] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.905 [2024-11-20 11:44:42.150472] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.905 [2024-11-20 11:44:42.160804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.905 qpair failed and we were unable to recover it. 
00:21:38.905 [2024-11-20 11:44:42.170249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.905 [2024-11-20 11:44:42.170290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.905 [2024-11-20 11:44:42.170308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.905 [2024-11-20 11:44:42.170317] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.905 [2024-11-20 11:44:42.170327] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.905 [2024-11-20 11:44:42.180732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.905 qpair failed and we were unable to recover it. 
00:21:38.905 [2024-11-20 11:44:42.190476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.905 [2024-11-20 11:44:42.190515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.905 [2024-11-20 11:44:42.190532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.905 [2024-11-20 11:44:42.190542] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.905 [2024-11-20 11:44:42.190551] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.905 [2024-11-20 11:44:42.200733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.905 qpair failed and we were unable to recover it. 
00:21:38.905 [2024-11-20 11:44:42.210508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.905 [2024-11-20 11:44:42.210552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.905 [2024-11-20 11:44:42.210574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.905 [2024-11-20 11:44:42.210584] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.905 [2024-11-20 11:44:42.210592] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.905 [2024-11-20 11:44:42.220784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.905 qpair failed and we were unable to recover it. 
00:21:38.905 [2024-11-20 11:44:42.230565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.905 [2024-11-20 11:44:42.230615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.905 [2024-11-20 11:44:42.230633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.905 [2024-11-20 11:44:42.230643] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.905 [2024-11-20 11:44:42.230652] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.905 [2024-11-20 11:44:42.240766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.905 qpair failed and we were unable to recover it. 
00:21:38.905 [2024-11-20 11:44:42.250810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.905 [2024-11-20 11:44:42.250856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.905 [2024-11-20 11:44:42.250874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.905 [2024-11-20 11:44:42.250883] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.905 [2024-11-20 11:44:42.250892] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.905 [2024-11-20 11:44:42.261192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.905 qpair failed and we were unable to recover it. 
00:21:38.905 [2024-11-20 11:44:42.270726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.905 [2024-11-20 11:44:42.270772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.905 [2024-11-20 11:44:42.270790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.905 [2024-11-20 11:44:42.270799] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.905 [2024-11-20 11:44:42.270809] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.905 [2024-11-20 11:44:42.280841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.905 qpair failed and we were unable to recover it. 
00:21:38.905 [2024-11-20 11:44:42.290822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.905 [2024-11-20 11:44:42.290866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.905 [2024-11-20 11:44:42.290884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.905 [2024-11-20 11:44:42.290893] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.905 [2024-11-20 11:44:42.290906] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.905 [2024-11-20 11:44:42.301082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.905 qpair failed and we were unable to recover it. 
00:21:38.905 [2024-11-20 11:44:42.310798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.906 [2024-11-20 11:44:42.310848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.906 [2024-11-20 11:44:42.310866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.906 [2024-11-20 11:44:42.310875] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.906 [2024-11-20 11:44:42.310884] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.906 [2024-11-20 11:44:42.321171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.906 qpair failed and we were unable to recover it. 
00:21:38.906 [2024-11-20 11:44:42.330848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.906 [2024-11-20 11:44:42.330894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.906 [2024-11-20 11:44:42.330912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.906 [2024-11-20 11:44:42.330921] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.906 [2024-11-20 11:44:42.330930] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.906 [2024-11-20 11:44:42.341164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.906 qpair failed and we were unable to recover it. 
00:21:38.906 [2024-11-20 11:44:42.351007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.906 [2024-11-20 11:44:42.351060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.906 [2024-11-20 11:44:42.351078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.906 [2024-11-20 11:44:42.351088] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.906 [2024-11-20 11:44:42.351097] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.906 [2024-11-20 11:44:42.361299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.906 qpair failed and we were unable to recover it. 
00:21:38.906 [2024-11-20 11:44:42.371016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:38.906 [2024-11-20 11:44:42.371065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:38.906 [2024-11-20 11:44:42.371083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:38.906 [2024-11-20 11:44:42.371093] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:38.906 [2024-11-20 11:44:42.371102] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:38.906 [2024-11-20 11:44:42.381292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:38.906 qpair failed and we were unable to recover it. 
00:21:39.166 [2024-11-20 11:44:42.391119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.166 [2024-11-20 11:44:42.391163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.166 [2024-11-20 11:44:42.391182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.166 [2024-11-20 11:44:42.391191] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.166 [2024-11-20 11:44:42.391200] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.166 [2024-11-20 11:44:42.401495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.166 qpair failed and we were unable to recover it. 
00:21:39.166 [2024-11-20 11:44:42.411093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.166 [2024-11-20 11:44:42.411135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.166 [2024-11-20 11:44:42.411153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.166 [2024-11-20 11:44:42.411163] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.166 [2024-11-20 11:44:42.411172] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.166 [2024-11-20 11:44:42.421369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.166 qpair failed and we were unable to recover it. 
00:21:39.166 [2024-11-20 11:44:42.431261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.166 [2024-11-20 11:44:42.431301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.166 [2024-11-20 11:44:42.431319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.166 [2024-11-20 11:44:42.431328] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.166 [2024-11-20 11:44:42.431337] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.166 [2024-11-20 11:44:42.441541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.166 qpair failed and we were unable to recover it. 
00:21:39.166 [2024-11-20 11:44:42.451353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.166 [2024-11-20 11:44:42.451395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.166 [2024-11-20 11:44:42.451413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.166 [2024-11-20 11:44:42.451422] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.166 [2024-11-20 11:44:42.451431] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.166 [2024-11-20 11:44:42.461466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.166 qpair failed and we were unable to recover it. 
00:21:39.166 [2024-11-20 11:44:42.471470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.166 [2024-11-20 11:44:42.471520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.166 [2024-11-20 11:44:42.471540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.166 [2024-11-20 11:44:42.471550] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.166 [2024-11-20 11:44:42.471559] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.166 [2024-11-20 11:44:42.481590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.166 qpair failed and we were unable to recover it. 
00:21:39.166 [2024-11-20 11:44:42.491356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.166 [2024-11-20 11:44:42.491405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.166 [2024-11-20 11:44:42.491423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.166 [2024-11-20 11:44:42.491433] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.166 [2024-11-20 11:44:42.491442] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.166 [2024-11-20 11:44:42.501674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.166 qpair failed and we were unable to recover it. 
00:21:39.166 [2024-11-20 11:44:42.511482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.166 [2024-11-20 11:44:42.511529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.166 [2024-11-20 11:44:42.511546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.166 [2024-11-20 11:44:42.511555] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.166 [2024-11-20 11:44:42.511564] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.166 [2024-11-20 11:44:42.521648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.166 qpair failed and we were unable to recover it. 
00:21:39.166 [2024-11-20 11:44:42.531562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.166 [2024-11-20 11:44:42.531607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.166 [2024-11-20 11:44:42.531624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.166 [2024-11-20 11:44:42.531634] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.166 [2024-11-20 11:44:42.531643] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.167 [2024-11-20 11:44:42.541765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.167 qpair failed and we were unable to recover it. 
00:21:39.167 [2024-11-20 11:44:42.551736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.167 [2024-11-20 11:44:42.551784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.167 [2024-11-20 11:44:42.551802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.167 [2024-11-20 11:44:42.551814] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.167 [2024-11-20 11:44:42.551823] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.167 [2024-11-20 11:44:42.561766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.167 qpair failed and we were unable to recover it. 
00:21:39.167 [2024-11-20 11:44:42.571747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.167 [2024-11-20 11:44:42.571793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.167 [2024-11-20 11:44:42.571810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.167 [2024-11-20 11:44:42.571819] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.167 [2024-11-20 11:44:42.571828] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.167 [2024-11-20 11:44:42.582003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.167 qpair failed and we were unable to recover it. 
00:21:39.167 [2024-11-20 11:44:42.591530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.167 [2024-11-20 11:44:42.591570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.167 [2024-11-20 11:44:42.591588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.167 [2024-11-20 11:44:42.591597] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.167 [2024-11-20 11:44:42.591607] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.167 [2024-11-20 11:44:42.602041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.167 qpair failed and we were unable to recover it. 
00:21:39.167 [2024-11-20 11:44:42.611758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.167 [2024-11-20 11:44:42.611806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.167 [2024-11-20 11:44:42.611824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.167 [2024-11-20 11:44:42.611833] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.167 [2024-11-20 11:44:42.611842] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.167 [2024-11-20 11:44:42.622053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.167 qpair failed and we were unable to recover it. 
00:21:39.167 [2024-11-20 11:44:42.631786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.167 [2024-11-20 11:44:42.631837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.167 [2024-11-20 11:44:42.631854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.167 [2024-11-20 11:44:42.631864] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.167 [2024-11-20 11:44:42.631872] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.167 [2024-11-20 11:44:42.642030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.167 qpair failed and we were unable to recover it. 
00:21:39.427 [2024-11-20 11:44:42.651953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.427 [2024-11-20 11:44:42.651996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.427 [2024-11-20 11:44:42.652013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.427 [2024-11-20 11:44:42.652023] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.427 [2024-11-20 11:44:42.652043] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.427 [2024-11-20 11:44:42.662028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.427 qpair failed and we were unable to recover it. 
00:21:39.427 [2024-11-20 11:44:42.671997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.427 [2024-11-20 11:44:42.672039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.427 [2024-11-20 11:44:42.672057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.427 [2024-11-20 11:44:42.672067] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.427 [2024-11-20 11:44:42.672076] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.427 [2024-11-20 11:44:42.682160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.427 qpair failed and we were unable to recover it. 
00:21:39.427 [2024-11-20 11:44:42.692012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.427 [2024-11-20 11:44:42.692060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.427 [2024-11-20 11:44:42.692077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.427 [2024-11-20 11:44:42.692087] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.427 [2024-11-20 11:44:42.692096] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.427 [2024-11-20 11:44:42.702166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.427 qpair failed and we were unable to recover it. 
00:21:39.427 [2024-11-20 11:44:42.712117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.427 [2024-11-20 11:44:42.712169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.427 [2024-11-20 11:44:42.712187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.427 [2024-11-20 11:44:42.712198] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.427 [2024-11-20 11:44:42.712207] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.427 [2024-11-20 11:44:42.722494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.427 qpair failed and we were unable to recover it. 
00:21:39.427 [2024-11-20 11:44:42.732015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.427 [2024-11-20 11:44:42.732059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.427 [2024-11-20 11:44:42.732077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.427 [2024-11-20 11:44:42.732087] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.427 [2024-11-20 11:44:42.732095] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.427 [2024-11-20 11:44:42.742450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.427 qpair failed and we were unable to recover it. 
00:21:39.427 [2024-11-20 11:44:42.752323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.427 [2024-11-20 11:44:42.752364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.427 [2024-11-20 11:44:42.752381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.427 [2024-11-20 11:44:42.752391] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.427 [2024-11-20 11:44:42.752400] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.427 [2024-11-20 11:44:42.762238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.427 qpair failed and we were unable to recover it. 
00:21:39.427 [2024-11-20 11:44:42.772102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.427 [2024-11-20 11:44:42.772147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.427 [2024-11-20 11:44:42.772164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.427 [2024-11-20 11:44:42.772174] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.427 [2024-11-20 11:44:42.772183] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.427 [2024-11-20 11:44:42.782415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.427 qpair failed and we were unable to recover it. 
00:21:39.427 [2024-11-20 11:44:42.792383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.427 [2024-11-20 11:44:42.792432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.427 [2024-11-20 11:44:42.792449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.427 [2024-11-20 11:44:42.792459] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.427 [2024-11-20 11:44:42.792467] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.427 [2024-11-20 11:44:42.802634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.427 qpair failed and we were unable to recover it. 
00:21:39.427 [2024-11-20 11:44:42.812442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.427 [2024-11-20 11:44:42.812488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.427 [2024-11-20 11:44:42.812509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.427 [2024-11-20 11:44:42.812519] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.427 [2024-11-20 11:44:42.812528] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.427 [2024-11-20 11:44:42.822801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.427 qpair failed and we were unable to recover it. 
00:21:39.427 [2024-11-20 11:44:42.832355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.427 [2024-11-20 11:44:42.832407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.427 [2024-11-20 11:44:42.832426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.427 [2024-11-20 11:44:42.832436] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.427 [2024-11-20 11:44:42.832444] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.427 [2024-11-20 11:44:42.842656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.428 qpair failed and we were unable to recover it. 
00:21:39.428 [2024-11-20 11:44:42.852503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.428 [2024-11-20 11:44:42.852548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.428 [2024-11-20 11:44:42.852565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.428 [2024-11-20 11:44:42.852575] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.428 [2024-11-20 11:44:42.852584] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.428 [2024-11-20 11:44:42.862808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.428 qpair failed and we were unable to recover it. 
00:21:39.428 [2024-11-20 11:44:42.872514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.428 [2024-11-20 11:44:42.872562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.428 [2024-11-20 11:44:42.872579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.428 [2024-11-20 11:44:42.872589] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.428 [2024-11-20 11:44:42.872598] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.428 [2024-11-20 11:44:42.882701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.428 qpair failed and we were unable to recover it. 
00:21:39.428 [2024-11-20 11:44:42.892642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.428 [2024-11-20 11:44:42.892687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.428 [2024-11-20 11:44:42.892704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.428 [2024-11-20 11:44:42.892714] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.428 [2024-11-20 11:44:42.892726] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.428 [2024-11-20 11:44:42.902956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.428 qpair failed and we were unable to recover it. 
00:21:39.688 [2024-11-20 11:44:42.912490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.688 [2024-11-20 11:44:42.912528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.688 [2024-11-20 11:44:42.912545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.688 [2024-11-20 11:44:42.912554] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.688 [2024-11-20 11:44:42.912563] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.688 [2024-11-20 11:44:42.922755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.688 qpair failed and we were unable to recover it. 
00:21:39.688 [2024-11-20 11:44:42.932637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.688 [2024-11-20 11:44:42.932680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.688 [2024-11-20 11:44:42.932698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.688 [2024-11-20 11:44:42.932708] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.688 [2024-11-20 11:44:42.932717] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.688 [2024-11-20 11:44:42.942797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.688 qpair failed and we were unable to recover it. 
00:21:39.688 [2024-11-20 11:44:42.952827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.688 [2024-11-20 11:44:42.952873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.688 [2024-11-20 11:44:42.952891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.688 [2024-11-20 11:44:42.952900] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.688 [2024-11-20 11:44:42.952909] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.688 [2024-11-20 11:44:42.962987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.688 qpair failed and we were unable to recover it. 
00:21:39.688 [2024-11-20 11:44:42.972719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.688 [2024-11-20 11:44:42.972767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.688 [2024-11-20 11:44:42.972785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.688 [2024-11-20 11:44:42.972795] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.688 [2024-11-20 11:44:42.972805] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.688 [2024-11-20 11:44:42.983102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.688 qpair failed and we were unable to recover it. 
00:21:39.688 [2024-11-20 11:44:42.992808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.688 [2024-11-20 11:44:42.992853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.688 [2024-11-20 11:44:42.992872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.688 [2024-11-20 11:44:42.992882] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.688 [2024-11-20 11:44:42.992891] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.688 [2024-11-20 11:44:43.003113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.688 qpair failed and we were unable to recover it. 
00:21:39.688 [2024-11-20 11:44:43.012962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.688 [2024-11-20 11:44:43.013004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.688 [2024-11-20 11:44:43.013021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.688 [2024-11-20 11:44:43.013031] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.688 [2024-11-20 11:44:43.013046] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.688 [2024-11-20 11:44:43.023229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.688 qpair failed and we were unable to recover it. 
00:21:39.688 [2024-11-20 11:44:43.033008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.688 [2024-11-20 11:44:43.033060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.688 [2024-11-20 11:44:43.033078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.688 [2024-11-20 11:44:43.033088] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.688 [2024-11-20 11:44:43.033096] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.688 [2024-11-20 11:44:43.043193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.688 qpair failed and we were unable to recover it. 
00:21:39.688 [2024-11-20 11:44:43.052992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.688 [2024-11-20 11:44:43.053038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.688 [2024-11-20 11:44:43.053056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.688 [2024-11-20 11:44:43.053065] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.688 [2024-11-20 11:44:43.053075] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.688 [2024-11-20 11:44:43.063212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.688 qpair failed and we were unable to recover it. 
00:21:39.688 [2024-11-20 11:44:43.073079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.688 [2024-11-20 11:44:43.073126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.688 [2024-11-20 11:44:43.073147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.688 [2024-11-20 11:44:43.073157] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.688 [2024-11-20 11:44:43.073166] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.688 [2024-11-20 11:44:43.083448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.688 qpair failed and we were unable to recover it. 
00:21:39.688 [2024-11-20 11:44:43.093125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.688 [2024-11-20 11:44:43.093172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.688 [2024-11-20 11:44:43.093190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.688 [2024-11-20 11:44:43.093200] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.689 [2024-11-20 11:44:43.093209] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.689 [2024-11-20 11:44:43.103416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.689 qpair failed and we were unable to recover it. 
00:21:39.689 [2024-11-20 11:44:43.113134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.689 [2024-11-20 11:44:43.113179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.689 [2024-11-20 11:44:43.113197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.689 [2024-11-20 11:44:43.113207] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.689 [2024-11-20 11:44:43.113216] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.689 [2024-11-20 11:44:43.123443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.689 qpair failed and we were unable to recover it. 
00:21:39.689 [2024-11-20 11:44:43.133193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.689 [2024-11-20 11:44:43.133240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.689 [2024-11-20 11:44:43.133257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.689 [2024-11-20 11:44:43.133266] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.689 [2024-11-20 11:44:43.133275] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.689 [2024-11-20 11:44:43.143431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.689 qpair failed and we were unable to recover it. 
00:21:39.689 [2024-11-20 11:44:43.153315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.689 [2024-11-20 11:44:43.153359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.689 [2024-11-20 11:44:43.153376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.689 [2024-11-20 11:44:43.153389] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.689 [2024-11-20 11:44:43.153398] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.689 [2024-11-20 11:44:43.163546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.689 qpair failed and we were unable to recover it. 
00:21:39.950 [2024-11-20 11:44:43.173191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.950 [2024-11-20 11:44:43.173238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.950 [2024-11-20 11:44:43.173255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.950 [2024-11-20 11:44:43.173265] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.950 [2024-11-20 11:44:43.173274] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.950 [2024-11-20 11:44:43.183649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.950 qpair failed and we were unable to recover it. 
00:21:39.950 [2024-11-20 11:44:43.193347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.950 [2024-11-20 11:44:43.193391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.950 [2024-11-20 11:44:43.193409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.950 [2024-11-20 11:44:43.193418] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.950 [2024-11-20 11:44:43.193427] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.950 [2024-11-20 11:44:43.203504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.950 qpair failed and we were unable to recover it. 
00:21:39.950 [2024-11-20 11:44:43.213347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.950 [2024-11-20 11:44:43.213387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.950 [2024-11-20 11:44:43.213405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.950 [2024-11-20 11:44:43.213415] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.950 [2024-11-20 11:44:43.213423] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.950 [2024-11-20 11:44:43.223449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.950 qpair failed and we were unable to recover it. 
00:21:39.950 [2024-11-20 11:44:43.233539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.950 [2024-11-20 11:44:43.233580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.950 [2024-11-20 11:44:43.233598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.950 [2024-11-20 11:44:43.233608] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.950 [2024-11-20 11:44:43.233616] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.950 [2024-11-20 11:44:43.243829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.950 qpair failed and we were unable to recover it. 
00:21:39.950 [2024-11-20 11:44:43.253545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.950 [2024-11-20 11:44:43.253592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.950 [2024-11-20 11:44:43.253610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.950 [2024-11-20 11:44:43.253620] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.950 [2024-11-20 11:44:43.253629] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.950 [2024-11-20 11:44:43.263807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.950 qpair failed and we were unable to recover it. 
00:21:39.950 [2024-11-20 11:44:43.273684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.950 [2024-11-20 11:44:43.273727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.950 [2024-11-20 11:44:43.273744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.950 [2024-11-20 11:44:43.273754] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.950 [2024-11-20 11:44:43.273763] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.950 [2024-11-20 11:44:43.283837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.950 qpair failed and we were unable to recover it. 
00:21:39.950 [2024-11-20 11:44:43.293594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.950 [2024-11-20 11:44:43.293635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.950 [2024-11-20 11:44:43.293653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.950 [2024-11-20 11:44:43.293663] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.950 [2024-11-20 11:44:43.293672] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.950 [2024-11-20 11:44:43.303726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.950 qpair failed and we were unable to recover it. 
00:21:39.950 [2024-11-20 11:44:43.313767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.950 [2024-11-20 11:44:43.313812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.950 [2024-11-20 11:44:43.313829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.950 [2024-11-20 11:44:43.313839] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.950 [2024-11-20 11:44:43.313848] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.950 [2024-11-20 11:44:43.324140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.950 qpair failed and we were unable to recover it. 
00:21:39.950 [2024-11-20 11:44:43.333791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.951 [2024-11-20 11:44:43.333838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.951 [2024-11-20 11:44:43.333855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.951 [2024-11-20 11:44:43.333865] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.951 [2024-11-20 11:44:43.333874] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.951 [2024-11-20 11:44:43.344006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.951 qpair failed and we were unable to recover it. 
00:21:39.951 [2024-11-20 11:44:43.353828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.951 [2024-11-20 11:44:43.353878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.951 [2024-11-20 11:44:43.353895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.951 [2024-11-20 11:44:43.353905] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.951 [2024-11-20 11:44:43.353914] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.951 [2024-11-20 11:44:43.364295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.951 qpair failed and we were unable to recover it. 
00:21:39.951 [2024-11-20 11:44:43.373825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.951 [2024-11-20 11:44:43.373874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.951 [2024-11-20 11:44:43.373893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.951 [2024-11-20 11:44:43.373902] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.951 [2024-11-20 11:44:43.373911] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.951 [2024-11-20 11:44:43.384258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.951 qpair failed and we were unable to recover it. 
00:21:39.951 [2024-11-20 11:44:43.393955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.951 [2024-11-20 11:44:43.393999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.951 [2024-11-20 11:44:43.394017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.951 [2024-11-20 11:44:43.394027] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.951 [2024-11-20 11:44:43.394041] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.951 [2024-11-20 11:44:43.404169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.951 qpair failed and we were unable to recover it. 
00:21:39.951 [2024-11-20 11:44:43.414093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:39.951 [2024-11-20 11:44:43.414138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:39.951 [2024-11-20 11:44:43.414159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:39.951 [2024-11-20 11:44:43.414168] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:39.951 [2024-11-20 11:44:43.414177] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:39.951 [2024-11-20 11:44:43.424254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:39.951 qpair failed and we were unable to recover it. 
00:21:40.211 [2024-11-20 11:44:43.434176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.211 [2024-11-20 11:44:43.434225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.211 [2024-11-20 11:44:43.434243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.211 [2024-11-20 11:44:43.434252] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.211 [2024-11-20 11:44:43.434261] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.211 [2024-11-20 11:44:43.444300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.211 qpair failed and we were unable to recover it. 
00:21:40.211 [2024-11-20 11:44:43.454138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.211 [2024-11-20 11:44:43.454185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.211 [2024-11-20 11:44:43.454203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.211 [2024-11-20 11:44:43.454213] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.211 [2024-11-20 11:44:43.454222] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.211 [2024-11-20 11:44:43.464379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.211 qpair failed and we were unable to recover it. 
00:21:40.211 [2024-11-20 11:44:43.474282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.211 [2024-11-20 11:44:43.474330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.211 [2024-11-20 11:44:43.474347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.211 [2024-11-20 11:44:43.474357] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.211 [2024-11-20 11:44:43.474365] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.211 [2024-11-20 11:44:43.484311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.211 qpair failed and we were unable to recover it. 
00:21:40.211 [2024-11-20 11:44:43.494182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.211 [2024-11-20 11:44:43.494225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.211 [2024-11-20 11:44:43.494242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.211 [2024-11-20 11:44:43.494251] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.211 [2024-11-20 11:44:43.494264] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.211 [2024-11-20 11:44:43.504543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.211 qpair failed and we were unable to recover it. 
00:21:40.211 [2024-11-20 11:44:43.514242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.211 [2024-11-20 11:44:43.514295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.211 [2024-11-20 11:44:43.514313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.211 [2024-11-20 11:44:43.514323] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.211 [2024-11-20 11:44:43.514332] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.211 [2024-11-20 11:44:43.524407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.212 qpair failed and we were unable to recover it. 
00:21:40.212 [2024-11-20 11:44:43.534420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.212 [2024-11-20 11:44:43.534467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.212 [2024-11-20 11:44:43.534485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.212 [2024-11-20 11:44:43.534494] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.212 [2024-11-20 11:44:43.534503] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.212 [2024-11-20 11:44:43.544689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.212 qpair failed and we were unable to recover it. 
00:21:40.212 [2024-11-20 11:44:43.554533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.212 [2024-11-20 11:44:43.554580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.212 [2024-11-20 11:44:43.554598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.212 [2024-11-20 11:44:43.554608] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.212 [2024-11-20 11:44:43.554617] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.212 [2024-11-20 11:44:43.564775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.212 qpair failed and we were unable to recover it. 
00:21:40.212 [2024-11-20 11:44:43.574458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.212 [2024-11-20 11:44:43.574509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.212 [2024-11-20 11:44:43.574526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.212 [2024-11-20 11:44:43.574536] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.212 [2024-11-20 11:44:43.574545] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.212 [2024-11-20 11:44:43.584659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.212 qpair failed and we were unable to recover it. 
00:21:40.212 [2024-11-20 11:44:43.594670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.212 [2024-11-20 11:44:43.594718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.212 [2024-11-20 11:44:43.594736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.212 [2024-11-20 11:44:43.594746] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.212 [2024-11-20 11:44:43.594755] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.212 [2024-11-20 11:44:43.604819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.212 qpair failed and we were unable to recover it. 
00:21:40.212 [2024-11-20 11:44:43.614566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.212 [2024-11-20 11:44:43.614613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.212 [2024-11-20 11:44:43.614630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.212 [2024-11-20 11:44:43.614640] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.212 [2024-11-20 11:44:43.614649] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.212 [2024-11-20 11:44:43.624899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.212 qpair failed and we were unable to recover it. 
00:21:40.212 [2024-11-20 11:44:43.634654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.212 [2024-11-20 11:44:43.634699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.212 [2024-11-20 11:44:43.634716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.212 [2024-11-20 11:44:43.634725] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.212 [2024-11-20 11:44:43.634734] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.212 [2024-11-20 11:44:43.644934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.212 qpair failed and we were unable to recover it. 
00:21:40.212 [2024-11-20 11:44:43.654588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.212 [2024-11-20 11:44:43.654636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.212 [2024-11-20 11:44:43.654653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.212 [2024-11-20 11:44:43.654663] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.212 [2024-11-20 11:44:43.654671] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.212 [2024-11-20 11:44:43.664892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.212 qpair failed and we were unable to recover it. 
00:21:40.212 [2024-11-20 11:44:43.674720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.212 [2024-11-20 11:44:43.674763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.212 [2024-11-20 11:44:43.674787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.212 [2024-11-20 11:44:43.674797] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.212 [2024-11-20 11:44:43.674805] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.212 [2024-11-20 11:44:43.685005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.212 qpair failed and we were unable to recover it. 
00:21:40.472 [2024-11-20 11:44:43.694881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.472 [2024-11-20 11:44:43.694929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.472 [2024-11-20 11:44:43.694947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.472 [2024-11-20 11:44:43.694956] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.472 [2024-11-20 11:44:43.694965] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.472 [2024-11-20 11:44:43.704993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.472 qpair failed and we were unable to recover it. 
00:21:40.472 [2024-11-20 11:44:43.714782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.472 [2024-11-20 11:44:43.714823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.472 [2024-11-20 11:44:43.714840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.472 [2024-11-20 11:44:43.714850] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.472 [2024-11-20 11:44:43.714859] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.472 [2024-11-20 11:44:43.725044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.472 qpair failed and we were unable to recover it. 
00:21:40.472 [2024-11-20 11:44:43.735090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.472 [2024-11-20 11:44:43.735135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.472 [2024-11-20 11:44:43.735153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.472 [2024-11-20 11:44:43.735162] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.472 [2024-11-20 11:44:43.735171] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.472 [2024-11-20 11:44:43.745238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.472 qpair failed and we were unable to recover it. 
00:21:40.472 [2024-11-20 11:44:43.754949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.472 [2024-11-20 11:44:43.754998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.472 [2024-11-20 11:44:43.755015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.472 [2024-11-20 11:44:43.755029] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.472 [2024-11-20 11:44:43.755044] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.472 [2024-11-20 11:44:43.765318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.472 qpair failed and we were unable to recover it. 
00:21:40.472 [2024-11-20 11:44:43.775011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.472 [2024-11-20 11:44:43.775060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.472 [2024-11-20 11:44:43.775078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.472 [2024-11-20 11:44:43.775088] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.472 [2024-11-20 11:44:43.775097] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.472 [2024-11-20 11:44:43.785373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.472 qpair failed and we were unable to recover it. 
00:21:40.472 [2024-11-20 11:44:43.795103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.472 [2024-11-20 11:44:43.795147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.472 [2024-11-20 11:44:43.795164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.472 [2024-11-20 11:44:43.795174] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.472 [2024-11-20 11:44:43.795183] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.472 [2024-11-20 11:44:43.805262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.472 qpair failed and we were unable to recover it. 
00:21:40.472 [2024-11-20 11:44:43.815288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.472 [2024-11-20 11:44:43.815332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.472 [2024-11-20 11:44:43.815350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.472 [2024-11-20 11:44:43.815360] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.472 [2024-11-20 11:44:43.815369] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.472 [2024-11-20 11:44:43.825509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.472 qpair failed and we were unable to recover it. 
00:21:40.472 [2024-11-20 11:44:43.835300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.472 [2024-11-20 11:44:43.835349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.472 [2024-11-20 11:44:43.835366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.472 [2024-11-20 11:44:43.835375] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.472 [2024-11-20 11:44:43.835384] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.473 [2024-11-20 11:44:43.845577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.473 qpair failed and we were unable to recover it. 
00:21:40.473 [2024-11-20 11:44:43.855256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.473 [2024-11-20 11:44:43.855297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.473 [2024-11-20 11:44:43.855315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.473 [2024-11-20 11:44:43.855325] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.473 [2024-11-20 11:44:43.855334] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.473 [2024-11-20 11:44:43.865646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.473 qpair failed and we were unable to recover it. 
00:21:40.473 [2024-11-20 11:44:43.875374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.473 [2024-11-20 11:44:43.875412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.473 [2024-11-20 11:44:43.875429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.473 [2024-11-20 11:44:43.875439] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.473 [2024-11-20 11:44:43.875448] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.473 [2024-11-20 11:44:43.885730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.473 qpair failed and we were unable to recover it. 
00:21:40.473 [2024-11-20 11:44:43.895529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.473 [2024-11-20 11:44:43.895571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.473 [2024-11-20 11:44:43.895589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.473 [2024-11-20 11:44:43.895598] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.473 [2024-11-20 11:44:43.895607] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.473 [2024-11-20 11:44:43.905763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.473 qpair failed and we were unable to recover it. 
00:21:40.473 [2024-11-20 11:44:43.915480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.473 [2024-11-20 11:44:43.915528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.473 [2024-11-20 11:44:43.915546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.473 [2024-11-20 11:44:43.915555] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.473 [2024-11-20 11:44:43.915564] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.473 [2024-11-20 11:44:43.925605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.473 qpair failed and we were unable to recover it. 
00:21:40.473 [2024-11-20 11:44:43.935562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.473 [2024-11-20 11:44:43.935611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.473 [2024-11-20 11:44:43.935629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.473 [2024-11-20 11:44:43.935638] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.473 [2024-11-20 11:44:43.935647] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.473 [2024-11-20 11:44:43.945895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.473 qpair failed and we were unable to recover it. 
00:21:40.732 [2024-11-20 11:44:43.955623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.732 [2024-11-20 11:44:43.955665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.732 [2024-11-20 11:44:43.955682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.732 [2024-11-20 11:44:43.955691] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.732 [2024-11-20 11:44:43.955700] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.732 [2024-11-20 11:44:43.965984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.732 qpair failed and we were unable to recover it. 
00:21:40.732 [2024-11-20 11:44:43.975587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.732 [2024-11-20 11:44:43.975632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.732 [2024-11-20 11:44:43.975650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.732 [2024-11-20 11:44:43.975659] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.732 [2024-11-20 11:44:43.975668] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.732 [2024-11-20 11:44:43.985955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.732 qpair failed and we were unable to recover it. 
00:21:40.732 [2024-11-20 11:44:43.995830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.732 [2024-11-20 11:44:43.995869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.732 [2024-11-20 11:44:43.995886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.733 [2024-11-20 11:44:43.995896] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.733 [2024-11-20 11:44:43.995904] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.733 [2024-11-20 11:44:44.006093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.733 qpair failed and we were unable to recover it. 
00:21:40.733 [2024-11-20 11:44:44.015716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.733 [2024-11-20 11:44:44.015757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.733 [2024-11-20 11:44:44.015777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.733 [2024-11-20 11:44:44.015787] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.733 [2024-11-20 11:44:44.015796] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.733 [2024-11-20 11:44:44.026037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.733 qpair failed and we were unable to recover it. 
00:21:40.733 [2024-11-20 11:44:44.035888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.733 [2024-11-20 11:44:44.035929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.733 [2024-11-20 11:44:44.035946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.733 [2024-11-20 11:44:44.035956] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.733 [2024-11-20 11:44:44.035965] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.733 [2024-11-20 11:44:44.046287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.733 qpair failed and we were unable to recover it. 
00:21:40.733 [2024-11-20 11:44:44.055926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.733 [2024-11-20 11:44:44.055973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.733 [2024-11-20 11:44:44.055990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.733 [2024-11-20 11:44:44.056000] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.733 [2024-11-20 11:44:44.056009] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.733 [2024-11-20 11:44:44.066247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.733 qpair failed and we were unable to recover it. 
00:21:40.733 [2024-11-20 11:44:44.076071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.733 [2024-11-20 11:44:44.076119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.733 [2024-11-20 11:44:44.076137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.733 [2024-11-20 11:44:44.076147] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.733 [2024-11-20 11:44:44.076156] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.733 [2024-11-20 11:44:44.086334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.733 qpair failed and we were unable to recover it. 
00:21:40.733 [2024-11-20 11:44:44.096023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.733 [2024-11-20 11:44:44.096076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.733 [2024-11-20 11:44:44.096095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.733 [2024-11-20 11:44:44.096105] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.733 [2024-11-20 11:44:44.096117] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.733 [2024-11-20 11:44:44.106337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.733 qpair failed and we were unable to recover it. 
00:21:40.733 [2024-11-20 11:44:44.116177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.733 [2024-11-20 11:44:44.116222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.733 [2024-11-20 11:44:44.116239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.733 [2024-11-20 11:44:44.116249] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.733 [2024-11-20 11:44:44.116258] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.733 [2024-11-20 11:44:44.126413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.733 qpair failed and we were unable to recover it. 
00:21:40.733 [2024-11-20 11:44:44.136060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.733 [2024-11-20 11:44:44.136108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.733 [2024-11-20 11:44:44.136126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.733 [2024-11-20 11:44:44.136135] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.733 [2024-11-20 11:44:44.136145] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.733 [2024-11-20 11:44:44.146328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.733 qpair failed and we were unable to recover it. 
00:21:40.733 [2024-11-20 11:44:44.156333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.733 [2024-11-20 11:44:44.156383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.733 [2024-11-20 11:44:44.156400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.733 [2024-11-20 11:44:44.156409] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.733 [2024-11-20 11:44:44.156418] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.733 [2024-11-20 11:44:44.166555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.733 qpair failed and we were unable to recover it. 
00:21:40.733 [2024-11-20 11:44:44.176303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.733 [2024-11-20 11:44:44.176351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.733 [2024-11-20 11:44:44.176369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.733 [2024-11-20 11:44:44.176379] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.733 [2024-11-20 11:44:44.176388] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.733 [2024-11-20 11:44:44.186554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.733 qpair failed and we were unable to recover it. 
00:21:40.733 [2024-11-20 11:44:44.196435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.733 [2024-11-20 11:44:44.196480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.733 [2024-11-20 11:44:44.196498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.733 [2024-11-20 11:44:44.196507] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.733 [2024-11-20 11:44:44.196516] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.733 [2024-11-20 11:44:44.206650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.733 qpair failed and we were unable to recover it. 
00:21:40.993 [2024-11-20 11:44:44.216451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.993 [2024-11-20 11:44:44.216496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.993 [2024-11-20 11:44:44.216513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.993 [2024-11-20 11:44:44.216523] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.993 [2024-11-20 11:44:44.216532] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.993 [2024-11-20 11:44:44.226519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.993 qpair failed and we were unable to recover it. 
00:21:40.993 [2024-11-20 11:44:44.236727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.993 [2024-11-20 11:44:44.236771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.993 [2024-11-20 11:44:44.236789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.993 [2024-11-20 11:44:44.236798] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.993 [2024-11-20 11:44:44.236807] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.993 [2024-11-20 11:44:44.246606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.993 qpair failed and we were unable to recover it. 
00:21:40.993 [2024-11-20 11:44:44.256589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.993 [2024-11-20 11:44:44.256627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.993 [2024-11-20 11:44:44.256644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.993 [2024-11-20 11:44:44.256654] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.993 [2024-11-20 11:44:44.256663] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.993 [2024-11-20 11:44:44.266625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.993 qpair failed and we were unable to recover it. 
00:21:40.993 [2024-11-20 11:44:44.276540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.993 [2024-11-20 11:44:44.276590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.993 [2024-11-20 11:44:44.276608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.993 [2024-11-20 11:44:44.276618] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.993 [2024-11-20 11:44:44.276628] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.993 [2024-11-20 11:44:44.286823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.993 qpair failed and we were unable to recover it. 
00:21:40.993 [2024-11-20 11:44:44.296677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.993 [2024-11-20 11:44:44.296721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.993 [2024-11-20 11:44:44.296739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.993 [2024-11-20 11:44:44.296748] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.993 [2024-11-20 11:44:44.296757] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.994 [2024-11-20 11:44:44.306970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.994 qpair failed and we were unable to recover it. 
00:21:40.994 [2024-11-20 11:44:44.316745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.994 [2024-11-20 11:44:44.316790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.994 [2024-11-20 11:44:44.316807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.994 [2024-11-20 11:44:44.316817] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.994 [2024-11-20 11:44:44.316826] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.994 [2024-11-20 11:44:44.326847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.994 qpair failed and we were unable to recover it. 
00:21:40.994 [2024-11-20 11:44:44.336822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.994 [2024-11-20 11:44:44.336867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.994 [2024-11-20 11:44:44.336884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.994 [2024-11-20 11:44:44.336894] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.994 [2024-11-20 11:44:44.336903] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.994 [2024-11-20 11:44:44.347089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.994 qpair failed and we were unable to recover it. 
00:21:40.994 [2024-11-20 11:44:44.356784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.994 [2024-11-20 11:44:44.356830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.994 [2024-11-20 11:44:44.356848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.994 [2024-11-20 11:44:44.356861] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.994 [2024-11-20 11:44:44.356870] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.994 [2024-11-20 11:44:44.367047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.994 qpair failed and we were unable to recover it. 
00:21:40.994 [2024-11-20 11:44:44.376887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.994 [2024-11-20 11:44:44.376932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.994 [2024-11-20 11:44:44.376949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.994 [2024-11-20 11:44:44.376959] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.994 [2024-11-20 11:44:44.376968] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.994 [2024-11-20 11:44:44.387116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.994 qpair failed and we were unable to recover it. 
00:21:40.994 [2024-11-20 11:44:44.396953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.994 [2024-11-20 11:44:44.396997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.994 [2024-11-20 11:44:44.397015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.994 [2024-11-20 11:44:44.397024] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.994 [2024-11-20 11:44:44.397038] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.994 [2024-11-20 11:44:44.407104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.994 qpair failed and we were unable to recover it. 
00:21:40.994 [2024-11-20 11:44:44.417039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.994 [2024-11-20 11:44:44.417085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.994 [2024-11-20 11:44:44.417102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.994 [2024-11-20 11:44:44.417112] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.994 [2024-11-20 11:44:44.417121] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.994 [2024-11-20 11:44:44.427313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.994 qpair failed and we were unable to recover it. 
00:21:40.994 [2024-11-20 11:44:44.437105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.994 [2024-11-20 11:44:44.437143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.994 [2024-11-20 11:44:44.437160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.994 [2024-11-20 11:44:44.437170] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.994 [2024-11-20 11:44:44.437179] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.994 [2024-11-20 11:44:44.447186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.994 qpair failed and we were unable to recover it. 
00:21:40.994 [2024-11-20 11:44:44.457155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:40.994 [2024-11-20 11:44:44.457202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:40.994 [2024-11-20 11:44:44.457220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:40.994 [2024-11-20 11:44:44.457230] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:40.994 [2024-11-20 11:44:44.457238] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:40.994 [2024-11-20 11:44:44.467409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:40.994 qpair failed and we were unable to recover it. 
00:21:41.254 [2024-11-20 11:44:44.477155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:41.254 [2024-11-20 11:44:44.477198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:41.254 [2024-11-20 11:44:44.477216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:41.254 [2024-11-20 11:44:44.477225] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:41.254 [2024-11-20 11:44:44.477234] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:41.254 [2024-11-20 11:44:44.487514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:41.254 qpair failed and we were unable to recover it. 
00:21:41.254 [2024-11-20 11:44:44.497340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:41.254 [2024-11-20 11:44:44.497381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:41.254 [2024-11-20 11:44:44.497398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:41.254 [2024-11-20 11:44:44.497408] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:41.254 [2024-11-20 11:44:44.497416] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:41.254 [2024-11-20 11:44:44.507510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:41.254 qpair failed and we were unable to recover it. 
00:21:41.254 [2024-11-20 11:44:44.517445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:41.254 [2024-11-20 11:44:44.517491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:41.254 [2024-11-20 11:44:44.517508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:41.254 [2024-11-20 11:44:44.517518] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:41.254 [2024-11-20 11:44:44.517527] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:41.254 [2024-11-20 11:44:44.527556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:41.254 qpair failed and we were unable to recover it. 
00:21:41.254 [2024-11-20 11:44:44.537478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:41.254 [2024-11-20 11:44:44.537523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:41.254 [2024-11-20 11:44:44.537541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:41.254 [2024-11-20 11:44:44.537550] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:41.254 [2024-11-20 11:44:44.537559] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:41.254 [2024-11-20 11:44:44.547688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:41.254 qpair failed and we were unable to recover it. 
00:21:41.254 [2024-11-20 11:44:44.557427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:41.254 [2024-11-20 11:44:44.557471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:41.254 [2024-11-20 11:44:44.557488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:41.254 [2024-11-20 11:44:44.557498] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:41.254 [2024-11-20 11:44:44.557507] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:41.254 [2024-11-20 11:44:44.567586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:41.254 qpair failed and we were unable to recover it. 
00:21:41.254 [2024-11-20 11:44:44.577530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:41.254 [2024-11-20 11:44:44.577580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:41.254 [2024-11-20 11:44:44.577597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:41.254 [2024-11-20 11:44:44.577607] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:41.254 [2024-11-20 11:44:44.577616] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:41.254 [2024-11-20 11:44:44.587813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:41.254 qpair failed and we were unable to recover it. 
00:21:41.254 [2024-11-20 11:44:44.597513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:41.255 [2024-11-20 11:44:44.597553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:41.255 [2024-11-20 11:44:44.597570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:41.255 [2024-11-20 11:44:44.597580] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:41.255 [2024-11-20 11:44:44.597589] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:41.255 [2024-11-20 11:44:44.607823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:41.255 qpair failed and we were unable to recover it. 
00:21:41.255 [2024-11-20 11:44:44.617609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:41.255 [2024-11-20 11:44:44.617651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:41.255 [2024-11-20 11:44:44.617672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:41.255 [2024-11-20 11:44:44.617682] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:41.255 [2024-11-20 11:44:44.617691] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:21:41.255 [2024-11-20 11:44:44.628000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:41.255 qpair failed and we were unable to recover it. 
00:21:41.255 [2024-11-20 11:44:44.637608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.255 [2024-11-20 11:44:44.637648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.255 [2024-11-20 11:44:44.637665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.255 [2024-11-20 11:44:44.637675] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.255 [2024-11-20 11:44:44.637684] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.255 [2024-11-20 11:44:44.647856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.255 qpair failed and we were unable to recover it.
00:21:41.255 [2024-11-20 11:44:44.657791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.255 [2024-11-20 11:44:44.657836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.255 [2024-11-20 11:44:44.657853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.255 [2024-11-20 11:44:44.657863] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.255 [2024-11-20 11:44:44.657872] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.255 [2024-11-20 11:44:44.668134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.255 qpair failed and we were unable to recover it.
00:21:41.255 [2024-11-20 11:44:44.677779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.255 [2024-11-20 11:44:44.677824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.255 [2024-11-20 11:44:44.677842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.255 [2024-11-20 11:44:44.677851] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.255 [2024-11-20 11:44:44.677860] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.255 [2024-11-20 11:44:44.688066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.255 qpair failed and we were unable to recover it.
00:21:41.255 [2024-11-20 11:44:44.697925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.255 [2024-11-20 11:44:44.697971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.255 [2024-11-20 11:44:44.697988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.255 [2024-11-20 11:44:44.697998] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.255 [2024-11-20 11:44:44.698010] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.255 [2024-11-20 11:44:44.708078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.255 qpair failed and we were unable to recover it.
00:21:41.255 [2024-11-20 11:44:44.717963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.255 [2024-11-20 11:44:44.718012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.255 [2024-11-20 11:44:44.718029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.255 [2024-11-20 11:44:44.718051] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.255 [2024-11-20 11:44:44.718060] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.255 [2024-11-20 11:44:44.728222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.255 qpair failed and we were unable to recover it.
00:21:41.515 [2024-11-20 11:44:44.737929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.515 [2024-11-20 11:44:44.737976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.515 [2024-11-20 11:44:44.737993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.515 [2024-11-20 11:44:44.738003] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.515 [2024-11-20 11:44:44.738012] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.515 [2024-11-20 11:44:44.748340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.515 qpair failed and we were unable to recover it.
00:21:41.515 [2024-11-20 11:44:44.757982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.515 [2024-11-20 11:44:44.758037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.515 [2024-11-20 11:44:44.758054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.515 [2024-11-20 11:44:44.758064] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.515 [2024-11-20 11:44:44.758074] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.515 [2024-11-20 11:44:44.768287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.515 qpair failed and we were unable to recover it.
00:21:41.515 [2024-11-20 11:44:44.778031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.515 [2024-11-20 11:44:44.778085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.515 [2024-11-20 11:44:44.778104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.515 [2024-11-20 11:44:44.778114] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.515 [2024-11-20 11:44:44.778122] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.515 [2024-11-20 11:44:44.788432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.515 qpair failed and we were unable to recover it.
00:21:41.515 [2024-11-20 11:44:44.798204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.515 [2024-11-20 11:44:44.798247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.515 [2024-11-20 11:44:44.798265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.515 [2024-11-20 11:44:44.798275] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.515 [2024-11-20 11:44:44.798284] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.515 [2024-11-20 11:44:44.808504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.515 qpair failed and we were unable to recover it.
00:21:41.515 [2024-11-20 11:44:44.818021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.515 [2024-11-20 11:44:44.818067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.515 [2024-11-20 11:44:44.818085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.515 [2024-11-20 11:44:44.818094] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.515 [2024-11-20 11:44:44.818103] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.515 [2024-11-20 11:44:44.828421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.515 qpair failed and we were unable to recover it.
00:21:41.515 [2024-11-20 11:44:44.838275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.515 [2024-11-20 11:44:44.838323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.515 [2024-11-20 11:44:44.838340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.515 [2024-11-20 11:44:44.838350] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.515 [2024-11-20 11:44:44.838360] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.515 [2024-11-20 11:44:44.848587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.515 qpair failed and we were unable to recover it.
00:21:41.515 [2024-11-20 11:44:44.858345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.515 [2024-11-20 11:44:44.858388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.515 [2024-11-20 11:44:44.858406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.515 [2024-11-20 11:44:44.858416] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.515 [2024-11-20 11:44:44.858425] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.515 [2024-11-20 11:44:44.868658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.515 qpair failed and we were unable to recover it.
00:21:41.515 [2024-11-20 11:44:44.878516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.515 [2024-11-20 11:44:44.878565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.515 [2024-11-20 11:44:44.878583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.515 [2024-11-20 11:44:44.878593] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.515 [2024-11-20 11:44:44.878602] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.515 [2024-11-20 11:44:44.888827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.515 qpair failed and we were unable to recover it.
00:21:41.515 [2024-11-20 11:44:44.898354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.515 [2024-11-20 11:44:44.898398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.515 [2024-11-20 11:44:44.898415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.515 [2024-11-20 11:44:44.898425] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.515 [2024-11-20 11:44:44.898434] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.515 [2024-11-20 11:44:44.908836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.515 qpair failed and we were unable to recover it.
00:21:41.515 [2024-11-20 11:44:44.918495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.515 [2024-11-20 11:44:44.918543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.515 [2024-11-20 11:44:44.918560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.515 [2024-11-20 11:44:44.918570] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.515 [2024-11-20 11:44:44.918579] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.515 [2024-11-20 11:44:44.928613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.515 qpair failed and we were unable to recover it.
00:21:41.516 [2024-11-20 11:44:44.938453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.516 [2024-11-20 11:44:44.938496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.516 [2024-11-20 11:44:44.938513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.516 [2024-11-20 11:44:44.938522] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.516 [2024-11-20 11:44:44.938531] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.516 [2024-11-20 11:44:44.948854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.516 qpair failed and we were unable to recover it.
00:21:41.516 [2024-11-20 11:44:44.958513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.516 [2024-11-20 11:44:44.958557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.516 [2024-11-20 11:44:44.958574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.516 [2024-11-20 11:44:44.958587] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.516 [2024-11-20 11:44:44.958596] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.516 [2024-11-20 11:44:44.968865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.516 qpair failed and we were unable to recover it.
00:21:41.516 [2024-11-20 11:44:44.978727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.516 [2024-11-20 11:44:44.978768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.516 [2024-11-20 11:44:44.978787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.516 [2024-11-20 11:44:44.978797] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.516 [2024-11-20 11:44:44.978806] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.516 [2024-11-20 11:44:44.988938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.516 qpair failed and we were unable to recover it.
00:21:41.775 [2024-11-20 11:44:44.998602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.775 [2024-11-20 11:44:44.998644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.775 [2024-11-20 11:44:44.998661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.775 [2024-11-20 11:44:44.998671] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.775 [2024-11-20 11:44:44.998680] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.775 [2024-11-20 11:44:45.009056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.775 qpair failed and we were unable to recover it.
00:21:41.775 [2024-11-20 11:44:45.018764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.775 [2024-11-20 11:44:45.018808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.775 [2024-11-20 11:44:45.018826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.775 [2024-11-20 11:44:45.018835] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.775 [2024-11-20 11:44:45.018844] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.775 [2024-11-20 11:44:45.028945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.775 qpair failed and we were unable to recover it.
00:21:41.775 [2024-11-20 11:44:45.038755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.775 [2024-11-20 11:44:45.038807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.775 [2024-11-20 11:44:45.038824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.775 [2024-11-20 11:44:45.038834] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.775 [2024-11-20 11:44:45.038843] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.775 [2024-11-20 11:44:45.049133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.775 qpair failed and we were unable to recover it.
00:21:41.775 [2024-11-20 11:44:45.058837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.775 [2024-11-20 11:44:45.058880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.775 [2024-11-20 11:44:45.058897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.775 [2024-11-20 11:44:45.058907] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.775 [2024-11-20 11:44:45.058916] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.776 [2024-11-20 11:44:45.069101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.776 qpair failed and we were unable to recover it.
00:21:41.776 [2024-11-20 11:44:45.079041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.776 [2024-11-20 11:44:45.079082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.776 [2024-11-20 11:44:45.079100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.776 [2024-11-20 11:44:45.079110] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.776 [2024-11-20 11:44:45.079119] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.776 [2024-11-20 11:44:45.089092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.776 qpair failed and we were unable to recover it.
00:21:41.776 [2024-11-20 11:44:45.098967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.776 [2024-11-20 11:44:45.099012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.776 [2024-11-20 11:44:45.099029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.776 [2024-11-20 11:44:45.099045] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.776 [2024-11-20 11:44:45.099054] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.776 [2024-11-20 11:44:45.109377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.776 qpair failed and we were unable to recover it.
00:21:41.776 [2024-11-20 11:44:45.119077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:41.776 [2024-11-20 11:44:45.119126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:41.776 [2024-11-20 11:44:45.119144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:41.776 [2024-11-20 11:44:45.119154] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:41.776 [2024-11-20 11:44:45.119163] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:41.776 [2024-11-20 11:44:45.129503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:41.776 qpair failed and we were unable to recover it.
00:21:42.714 Read completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Read completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Read completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Read completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Read completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Read completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Read completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Read completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Read completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Write completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 Read completed with error (sct=0, sc=8)
00:21:42.714 starting I/O failed
00:21:42.714 [2024-11-20 11:44:46.134310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:44.092 Read completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Read completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Read completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Read completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Write completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Read completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Read completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Read completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Read completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Write completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Read completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Read completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Read completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Write completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Read completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Write completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Write completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Read completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Write completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.092 Write completed with error (sct=0, sc=8)
00:21:44.092 starting I/O failed
00:21:44.093 Write completed with error (sct=0, sc=8)
00:21:44.093 starting I/O failed
00:21:44.093 Write completed with error (sct=0, sc=8)
00:21:44.093 starting I/O failed
00:21:44.093 Write completed with error (sct=0, sc=8)
00:21:44.093 starting I/O failed
00:21:44.093 Read completed with error (sct=0, sc=8)
00:21:44.093 starting I/O failed
00:21:44.093 Read completed with error (sct=0, sc=8)
00:21:44.093 starting I/O failed
00:21:44.093 Write completed with error (sct=0, sc=8)
00:21:44.093 starting I/O failed
00:21:44.093 Read completed with error (sct=0, sc=8)
00:21:44.093 starting I/O failed
00:21:44.093 Read completed with error (sct=0, sc=8)
00:21:44.093 starting I/O failed
00:21:44.093 Read completed with error (sct=0, sc=8)
00:21:44.093 starting I/O failed
00:21:44.093 Read completed with error (sct=0, sc=8)
00:21:44.093 starting I/O failed
00:21:44.093 Read completed with error (sct=0, sc=8)
00:21:44.093 starting I/O failed
00:21:44.093 Write completed with error (sct=0, sc=8)
00:21:44.093 starting I/O failed
00:21:44.093 [2024-11-20 11:44:47.139530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:44.093 [2024-11-20 11:44:47.144732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:44.093 [2024-11-20 11:44:47.144784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:44.093 [2024-11-20 11:44:47.144813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:44.093 [2024-11-20 11:44:47.144828] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:44.093 [2024-11-20 11:44:47.144841] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:21:44.093 [2024-11-20 11:44:47.155506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:44.093 qpair failed and we were unable to recover it.
00:21:44.093 [2024-11-20 11:44:47.165222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:44.093 [2024-11-20 11:44:47.165270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:44.093 [2024-11-20 11:44:47.165288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:44.093 [2024-11-20 11:44:47.165298] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:44.093 [2024-11-20 11:44:47.165308] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:21:44.093 [2024-11-20 11:44:47.175436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:44.093 qpair failed and we were unable to recover it. 
00:21:44.093 [2024-11-20 11:44:47.185284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:44.093 [2024-11-20 11:44:47.185334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:44.093 [2024-11-20 11:44:47.185361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:44.093 [2024-11-20 11:44:47.185375] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:44.093 [2024-11-20 11:44:47.185388] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:44.093 [2024-11-20 11:44:47.195596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:44.093 qpair failed and we were unable to recover it.
00:21:44.093 [2024-11-20 11:44:47.205146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:44.093 [2024-11-20 11:44:47.205189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:44.093 [2024-11-20 11:44:47.205207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:44.093 [2024-11-20 11:44:47.205217] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:44.093 [2024-11-20 11:44:47.205226] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180
00:21:44.093 [2024-11-20 11:44:47.215640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:44.093 qpair failed and we were unable to recover it.
00:21:44.093 [2024-11-20 11:44:47.215709] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:21:44.093 A controller has encountered a failure and is being reset.
00:21:44.093 [2024-11-20 11:44:47.225325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:44.093 [2024-11-20 11:44:47.225365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:44.093 [2024-11-20 11:44:47.225385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:44.093 [2024-11-20 11:44:47.225395] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:44.093 [2024-11-20 11:44:47.225405] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:21:44.093 [2024-11-20 11:44:47.235607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:44.093 qpair failed and we were unable to recover it.
00:21:44.093 [2024-11-20 11:44:47.245303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:44.093 [2024-11-20 11:44:47.245358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:44.093 [2024-11-20 11:44:47.245384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:44.093 [2024-11-20 11:44:47.245397] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:44.093 [2024-11-20 11:44:47.245409] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:21:44.093 [2024-11-20 11:44:47.255699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:44.093 qpair failed and we were unable to recover it.
00:21:44.093 [2024-11-20 11:44:47.265486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:44.093 [2024-11-20 11:44:47.265528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:44.093 [2024-11-20 11:44:47.265546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:44.093 [2024-11-20 11:44:47.265555] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:44.093 [2024-11-20 11:44:47.265564] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:21:44.093 [2024-11-20 11:44:47.275865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:44.093 qpair failed and we were unable to recover it.
00:21:44.093 [2024-11-20 11:44:47.276030] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:21:44.093 [2024-11-20 11:44:47.308550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:21:44.093 Controller properly reset.
00:21:44.093 Initializing NVMe Controllers
00:21:44.093 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:44.093 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:44.093 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:21:44.093 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:21:44.093 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:21:44.093 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:21:44.093 Initialization complete. Launching workers.
00:21:44.093 Starting thread on core 1
00:21:44.093 Starting thread on core 2
00:21:44.093 Starting thread on core 3
00:21:44.093 Starting thread on core 0
00:21:44.093 11:44:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:21:44.093
00:21:44.093 real 0m13.010s
00:21:44.093 user 0m26.821s
00:21:44.093 sys 0m3.351s
00:21:44.093 11:44:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:44.093 11:44:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:44.093 ************************************
00:21:44.093 END TEST nvmf_target_disconnect_tc2
00:21:44.093 ************************************
00:21:44.093 11:44:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 10.0.0.1 ']'
00:21:44.093 11:44:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3
00:21:44.093 11:44:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:21:44.093 11:44:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:44.093 11:44:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:21:44.093 ************************************
00:21:44.093 START TEST nvmf_target_disconnect_tc3
00:21:44.093 ************************************
00:21:44.093 11:44:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc3
00:21:44.093 11:44:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=1716790
00:21:44.093 11:44:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2
00:21:44.093 11:44:47 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 alt_traddr:10.0.0.1'
00:21:45.994 11:44:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 1715828
00:21:45.994 11:44:49 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Write completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Write completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Write completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Write completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Write completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Write completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Write completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Write completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Write completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Write completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Write completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Read completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 Write completed with error (sct=0, sc=8)
00:21:47.371 starting I/O failed
00:21:47.371 [2024-11-20 11:44:50.652264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1
00:21:48.314 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 1715828 Killed "${NVMF_APP[@]}" "$@"
00:21:48.314 11:44:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 10.0.0.1
00:21:48.314 11:44:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:21:48.314 11:44:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:21:48.314 11:44:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:48.314 11:44:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:48.314 11:44:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@328 -- # nvmfpid=1717331
00:21:48.314 11:44:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@329 -- # waitforlisten 1717331
00:21:48.314 11:44:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:21:48.314 11:44:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1717331 ']'
00:21:48.314 11:44:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:48.314 11:44:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:48.314 11:44:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:48.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:48.314 11:44:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:48.314 11:44:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:48.314 [2024-11-20 11:44:51.533505] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:21:48.314 [2024-11-20 11:44:51.533572] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:48.314 [2024-11-20 11:44:51.626844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:48.314 Write completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Read completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Read completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Read completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Read completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Write completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Write completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Write completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Write completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Write completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Read completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Write completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Read completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Read completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Read completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Write completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Write completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Write completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Read completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Read completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Read completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Read completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Read completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Write completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Write completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Write completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Write completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Write completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Write completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Read completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Read completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 Read completed with error (sct=0, sc=8)
00:21:48.314 starting I/O failed
00:21:48.314 [2024-11-20 11:44:51.657435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:21:48.314 [2024-11-20 11:44:51.671513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-11-20 11:44:51.671552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-11-20 11:44:51.671561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-11-20 11:44:51.671570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-11-20 11:44:51.671577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-11-20 11:44:51.672867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
[2024-11-20 11:44:51.672967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
[2024-11-20 11:44:51.673080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
[2024-11-20 11:44:51.673081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:21:49.251 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:49.251 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@868 -- # return 0
00:21:49.251 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:21:49.251 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:49.251 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:49.251 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:49.251 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:21:49.251 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:49.251 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:49.251 Malloc0
00:21:49.251 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:49.251 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:21:49.251 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:49.251 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:49.251 [2024-11-20 11:44:52.484621] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23e8570/0x23f4040) succeed.
[2024-11-20 11:44:52.494255] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23e9c00/0x24356e0) succeed.
00:21:49.251 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:49.251 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:49.251 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:49.251 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:49.252 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:49.252 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:49.252 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:49.252 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:49.252 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:49.252 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 10.0.0.1 -s 4420
00:21:49.252 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:49.252 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:49.252 [2024-11-20 11:44:52.644851] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.1 port 4420 ***
00:21:49.252 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:49.252 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 10.0.0.1 -s 4420
00:21:49.252 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:49.252 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:49.252 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:49.252 11:44:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 1716790
00:21:49.252 Read completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Read completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Read completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Write completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Read completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Read completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Read completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Read completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Write completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Write completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Write completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Write completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Read completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Write completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Write completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Read completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Write completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Read completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Read completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Read completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Read completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Write completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Write completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Write completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Read completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Read completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Write completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Read completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Read completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Write completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Write completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 Write completed with error (sct=0, sc=8)
00:21:49.252 starting I/O failed
00:21:49.252 [2024-11-20 11:44:52.662322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.252 [2024-11-20 11:44:52.663842] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:49.252 [2024-11-20 11:44:52.663864] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:49.252 [2024-11-20 11:44:52.663873] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:21:50.628 [2024-11-20 11:44:53.667696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.628 qpair failed and we were unable to recover it.
00:21:50.628 [2024-11-20 11:44:53.669151] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:50.628 [2024-11-20 11:44:53.669171] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:50.628 [2024-11-20 11:44:53.669179] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:21:51.563 [2024-11-20 11:44:54.672880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:21:51.563 qpair failed and we were unable to recover it.
00:21:51.563 [2024-11-20 11:44:54.674257] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:51.563 [2024-11-20 11:44:54.674276] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:51.563 [2024-11-20 11:44:54.674284] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:21:52.499 [2024-11-20 11:44:55.677950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:21:52.499 qpair failed and we were unable to recover it.
00:21:52.499 [2024-11-20 11:44:55.679331] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:52.499 [2024-11-20 11:44:55.679350] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:52.499 [2024-11-20 11:44:55.679358] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:21:53.434 [2024-11-20 11:44:56.683205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:21:53.434 qpair failed and we were unable to recover it.
00:21:53.434 [2024-11-20 11:44:56.684537] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:53.434 [2024-11-20 11:44:56.684556] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:53.434 [2024-11-20 11:44:56.684564] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:21:54.371 [2024-11-20 11:44:57.688307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:21:54.371 qpair failed and we were unable to recover it.
00:21:54.371 [2024-11-20 11:44:57.689650] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:54.371 [2024-11-20 11:44:57.689672] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:54.371 [2024-11-20 11:44:57.689680] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:21:55.403 [2024-11-20 11:44:58.693373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:21:55.403 qpair failed and we were unable to recover it.
00:21:55.403 [2024-11-20 11:44:58.694691] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:55.403 [2024-11-20 11:44:58.694709] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:55.403 [2024-11-20 11:44:58.694717] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:21:56.339 [2024-11-20 11:44:59.698519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:21:56.339 qpair failed and we were unable to recover it.
00:21:57.275 Write completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Read completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Read completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Write completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Read completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Write completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Read completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Write completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Write completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Read completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Read completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Write completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Write completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Read completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Write completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Read completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Read completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Write completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Write completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Read completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Read completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Write completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Read completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Read completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Write completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Write completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Write completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Write completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Read completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Write completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Write completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 Write completed with error (sct=0, sc=8)
00:21:57.275 starting I/O failed
00:21:57.275 [2024-11-20 11:45:00.703585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4
00:21:57.275 [2024-11-20 11:45:00.705073] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:57.275 [2024-11-20 11:45:00.705094] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:57.275 [2024-11-20 11:45:00.705103] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:21:58.653 [2024-11-20 11:45:01.709053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4
00:21:58.653 qpair failed and we were unable to recover it.
00:21:58.653 [2024-11-20 11:45:01.710450] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:58.653 [2024-11-20 11:45:01.710473] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:58.653 [2024-11-20 11:45:01.710482] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:21:59.590 [2024-11-20 11:45:02.714173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4
00:21:59.590 qpair failed and we were unable to recover it.
00:21:59.590 [2024-11-20 11:45:02.714311] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed
00:21:59.590 A controller has encountered a failure and is being reset.
00:21:59.590 Resorting to new failover address 10.0.0.1 00:21:59.590 [2024-11-20 11:45:02.715943] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:59.590 [2024-11-20 11:45:02.715972] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:59.590 [2024-11-20 11:45:02.715985] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:22:00.529 [2024-11-20 11:45:03.719732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:22:00.529 qpair failed and we were unable to recover it. 00:22:00.529 [2024-11-20 11:45:03.721162] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:00.529 [2024-11-20 11:45:03.721181] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:00.529 [2024-11-20 11:45:03.721190] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:22:01.468 [2024-11-20 11:45:04.724938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:22:01.468 qpair failed and we were unable to recover it. 00:22:01.468 [2024-11-20 11:45:04.725062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:22:01.468 [2024-11-20 11:45:04.725177] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:22:01.468 [2024-11-20 11:45:04.756439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:22:01.468 Controller properly reset. 00:22:01.468 Initializing NVMe Controllers 00:22:01.468 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:01.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:01.468 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:22:01.468 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:22:01.468 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:22:01.468 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:22:01.468 Initialization complete. Launching workers. 
00:22:01.468 Starting thread on core 1 00:22:01.468 Starting thread on core 2 00:22:01.468 Starting thread on core 3 00:22:01.468 Starting thread on core 0 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:22:01.468 00:22:01.468 real 0m17.360s 00:22:01.468 user 1m1.819s 00:22:01.468 sys 0m5.284s 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:01.468 ************************************ 00:22:01.468 END TEST nvmf_target_disconnect_tc3 00:22:01.468 ************************************ 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@99 -- # sync 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # set +e 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:22:01.468 rmmod nvme_rdma 00:22:01.468 rmmod nvme_fabrics 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 
00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # set -e 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # return 0 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # '[' -n 1717331 ']' 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@337 -- # killprocess 1717331 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1717331 ']' 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1717331 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.468 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1717331 00:22:01.727 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:22:01.727 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:22:01.728 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1717331' 00:22:01.728 killing process with pid 1717331 00:22:01.728 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1717331 00:22:01.728 11:45:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1717331 00:22:01.987 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:01.987 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:22:01.987 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/setup.sh@264 -- # local dev 00:22:01.987 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@267 -- # remove_target_ns 00:22:01.987 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:01.987 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:01.987 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:01.987 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@268 -- # delete_main_bridge 00:22:01.987 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:01.987 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@130 -- # return 0 00:22:01.987 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:01.987 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:22:01.987 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:01.987 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:22:01.987 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:22:01.987 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:01.987 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:22:01.987 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@41 -- # _dev=0 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@41 -- # dev_map=() 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@284 -- # iptr 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@542 -- # iptables-save 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@542 -- # iptables-restore 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:22:01.988 00:22:01.988 real 0m38.049s 00:22:01.988 user 2m25.073s 00:22:01.988 sys 0m13.725s 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:01.988 ************************************ 00:22:01.988 END TEST nvmf_target_disconnect 00:22:01.988 ************************************ 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host -- 
nvmf/nvmf_host.sh@30 -- # [[ rdma == \t\c\p ]] 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@37 -- # [[ rdma == \r\d\m\a ]] 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.988 ************************************ 00:22:01.988 START TEST dma 00:22:01.988 ************************************ 00:22:01.988 11:45:05 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:22:02.248 * Looking for test storage... 00:22:02.248 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.248 11:45:05 
nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:02.248 11:45:05 
nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:02.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.248 --rc genhtml_branch_coverage=1 00:22:02.248 --rc genhtml_function_coverage=1 00:22:02.248 --rc genhtml_legend=1 00:22:02.248 --rc geninfo_all_blocks=1 00:22:02.248 --rc geninfo_unexecuted_blocks=1 00:22:02.248 00:22:02.248 ' 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:02.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.248 --rc genhtml_branch_coverage=1 00:22:02.248 --rc genhtml_function_coverage=1 00:22:02.248 --rc genhtml_legend=1 00:22:02.248 --rc geninfo_all_blocks=1 00:22:02.248 --rc geninfo_unexecuted_blocks=1 00:22:02.248 00:22:02.248 ' 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:02.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.248 --rc genhtml_branch_coverage=1 00:22:02.248 --rc genhtml_function_coverage=1 00:22:02.248 --rc genhtml_legend=1 00:22:02.248 --rc geninfo_all_blocks=1 00:22:02.248 --rc geninfo_unexecuted_blocks=1 00:22:02.248 00:22:02.248 ' 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:02.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.248 --rc genhtml_branch_coverage=1 00:22:02.248 --rc genhtml_function_coverage=1 00:22:02.248 --rc genhtml_legend=1 00:22:02.248 --rc geninfo_all_blocks=1 00:22:02.248 --rc geninfo_unexecuted_blocks=1 00:22:02.248 00:22:02.248 ' 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 
00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.248 11:45:05 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@50 -- # : 0 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:02.249 11:45:05 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:02.249 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # MALLOC_BDEV_SIZE=256 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- host/dma.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- host/dma.sh@14 -- # subsystem=0 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- host/dma.sh@89 -- # nvmftestinit 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@260 -- # remove_target_ns 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_target_ns 
00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # xtrace_disable 00:22:02.249 11:45:05 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@131 -- # pci_devs=() 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@135 -- # net_devs=() 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@136 -- # e810=() 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@136 -- # local -ga e810 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@137 -- # x722=() 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@137 -- # local -ga x722 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@138 -- # mlx=() 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@138 -- # local -ga mlx 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 
00:22:08.816 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:22:08.816 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@232 -- # [[ 
rdma == tcp ]] 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:08.816 Found net devices under 0000:18:00.0: mlx_0_0 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:08.816 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:08.817 Found net devices under 0000:18:00.1: mlx_0_1 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@253 -- # get_rdma_if_list 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@75 -- # rdma_devs=() 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:22:08.817 11:45:11 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@89 -- # continue 2 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@89 -- # continue 2 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@262 -- # is_hw=yes 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:22:08.817 11:45:11 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@61 -- # uname 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@65 -- # modprobe ib_cm 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_core 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_umad 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe iw_cm 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@28 -- # local -g _dev 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@44 -- # ips=() 00:22:08.817 11:45:11 
nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@58 -- # key_initiator=target1 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@11 -- # local val=167772161 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 
dev mlx_0_0' 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev mlx_0_0 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:22:08.817 10.0.0.1 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@11 -- # local val=167772162 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:22:08.817 10.0.0.2 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:22:08.817 11:45:11 
nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@76 -- # set_up mlx_0_1 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@38 -- # ping_ips 1 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma 
-- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@107 -- # local dev=target0 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:08.817 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:22:08.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
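The `val_to_ip` calls earlier in this trace turn the integer IP-pool value (167772161, i.e. 0x0A000001) into the dotted-quad address that `ip addr add` then assigns. A rough Python sketch of that conversion follows; the real helper is the `printf '%u.%u.%u.%u'` in `nvmf/setup.sh`, and the function name here is ours:

```python
def val_to_ip(val: int) -> str:
    # Split the 32-bit pool value into four octets, most significant first,
    # mirroring setup.sh's printf '%u.%u.%u.%u' expansion of val_to_ip.
    return ".".join(str((val >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(val_to_ip(167772161))  # 10.0.0.1 -- assigned to mlx_0_0 above
print(val_to_ip(167772162))  # 10.0.0.2 -- assigned to mlx_0_1 above
```

This also explains the `ips=("$ip" $((++ip)))` step in the log: each initiator/target pair consumes two consecutive values from the pool.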
00:22:08.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:22:08.818 00:22:08.818 --- 10.0.0.2 ping statistics --- 00:22:08.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.818 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@107 -- # local dev=target0 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:22:08.818 11:45:11 
nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:22:08.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.025 ms 00:22:08.818 00:22:08.818 --- 10.0.0.2 ping statistics --- 00:22:08.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.818 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@98 -- # (( pair++ )) 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@270 -- # return 0 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@107 -- # local dev=target0 00:22:08.818 
11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@168 -- # get_net_dev target1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@107 -- # local dev=target1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- 
nvmf/setup.sh@168 -- # dev=mlx_0_0 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@107 -- # local dev=target0 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- 
nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@168 -- # get_net_dev target1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@107 -- # local dev=target1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # 
[[ rdma == \r\d\m\a ]] 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@90 -- # nvmfappstart -m 0x3 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # nvmfpid=1722472 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@327 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@329 -- # waitforlisten 1722472 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # '[' -z 1722472 ']' 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.818 11:45:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.819 11:45:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:08.819 11:45:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.819 11:45:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:08.819 [2024-11-20 11:45:11.702042] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:22:08.819 [2024-11-20 11:45:11.702106] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.819 [2024-11-20 11:45:11.783369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:08.819 [2024-11-20 11:45:11.835389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.819 [2024-11-20 11:45:11.835433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.819 [2024-11-20 11:45:11.835443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.819 [2024-11-20 11:45:11.835452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.819 [2024-11-20 11:45:11.835461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:08.819 [2024-11-20 11:45:11.836603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.819 [2024-11-20 11:45:11.836606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.077 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.077 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@868 -- # return 0 00:22:09.077 11:45:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:09.077 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.077 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- host/dma.sh@92 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:09.337 [2024-11-20 11:45:12.612293] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdfeb60/0xe03050) succeed. 00:22:09.337 [2024-11-20 11:45:12.621012] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe000b0/0xe446f0) succeed. 
00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:09.337 Malloc0 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- host/dma.sh@95 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 10.0.0.2 -s 4420 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:09.337 [2024-11-20 11:45:12.787537] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4420 *** 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.337 11:45:12 
nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # gen_nvmf_target_json 0 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # config=() 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # local subsystem config 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:22:09.337 { 00:22:09.337 "params": { 00:22:09.337 "name": "Nvme$subsystem", 00:22:09.337 "trtype": "$TEST_TRANSPORT", 00:22:09.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.337 "adrfam": "ipv4", 00:22:09.337 "trsvcid": "$NVMF_PORT", 00:22:09.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.337 "hdgst": ${hdgst:-false}, 00:22:09.337 "ddgst": ${ddgst:-false} 00:22:09.337 }, 00:22:09.337 "method": "bdev_nvme_attach_controller" 00:22:09.337 } 00:22:09.337 EOF 00:22:09.337 )") 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@394 -- # cat 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@396 -- # jq . 
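The `gen_nvmf_target_json` heredoc above substitutes the run's environment (`$NVMF_FIRST_TARGET_IP`, `$NVMF_PORT`, the subsystem index) into a per-controller config that `jq` then normalizes. A minimal Python sketch of the same expansion, with the values hard-coded from this run rather than read from the environment:

```python
import json

def gen_nvmf_target_json(subsystem: int = 0) -> str:
    # Mirrors the heredoc in nvmf/common.sh; traddr/trsvcid are this run's
    # NVMF_FIRST_TARGET_IP (10.0.0.2) and NVMF_PORT (4420).
    cfg = {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": "rdma",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": False,
            "ddgst": False,
        },
        "method": "bdev_nvme_attach_controller",
    }
    return json.dumps(cfg, indent=2)

print(gen_nvmf_target_json(0))
```

The resulting JSON is fed to `test_dma` over `/dev/fd/62`, which is why the test binary needs no RPC socket of its own to attach `Nvme0n1`.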
00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@397 -- # IFS=, 00:22:09.337 11:45:12 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:22:09.337 "params": { 00:22:09.337 "name": "Nvme0", 00:22:09.337 "trtype": "rdma", 00:22:09.337 "traddr": "10.0.0.2", 00:22:09.337 "adrfam": "ipv4", 00:22:09.337 "trsvcid": "4420", 00:22:09.337 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:09.337 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:09.337 "hdgst": false, 00:22:09.337 "ddgst": false 00:22:09.337 }, 00:22:09.337 "method": "bdev_nvme_attach_controller" 00:22:09.337 }' 00:22:09.597 [2024-11-20 11:45:12.837967] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:22:09.597 [2024-11-20 11:45:12.838016] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1722609 ] 00:22:09.597 [2024-11-20 11:45:12.913111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:09.597 [2024-11-20 11:45:12.960611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:09.597 [2024-11-20 11:45:12.960615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.166 bdev Nvme0n1 reports 1 memory domains 00:22:16.166 bdev Nvme0n1 supports RDMA memory domain 00:22:16.166 Initialization complete, running randrw IO for 5 sec on 2 cores 00:22:16.166 ========================================================================== 00:22:16.166 Latency [us] 00:22:16.167 IOPS MiB/s Average min max 00:22:16.167 Core 2: 21230.08 82.93 752.80 257.23 8320.62 00:22:16.167 Core 3: 21369.23 83.47 747.92 253.20 8439.02 00:22:16.167 ========================================================================== 00:22:16.167 Total : 42599.30 166.40 750.35 253.20 8439.02 00:22:16.167 00:22:16.167 Total operations: 213076, translate 
213076 pull_push 0 memzero 0 00:22:16.167 11:45:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:22:16.167 11:45:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@103 -- # gen_malloc_json 00:22:16.167 11:45:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # jq . 00:22:16.167 [2024-11-20 11:45:18.408301] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:22:16.167 [2024-11-20 11:45:18.408361] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1723343 ] 00:22:16.167 [2024-11-20 11:45:18.481239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:16.167 [2024-11-20 11:45:18.525793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:16.167 [2024-11-20 11:45:18.525796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.358 bdev Malloc0 reports 2 memory domains 00:22:20.358 bdev Malloc0 doesn't support RDMA memory domain 00:22:20.358 Initialization complete, running randrw IO for 5 sec on 2 cores 00:22:20.358 ========================================================================== 00:22:20.358 Latency [us] 00:22:20.358 IOPS MiB/s Average min max 00:22:20.358 Core 2: 14047.28 54.87 1138.29 426.81 2239.28 00:22:20.358 Core 3: 14179.24 55.39 1127.66 432.83 2442.31 00:22:20.358 ========================================================================== 00:22:20.358 Total : 28226.52 110.26 1132.95 426.81 2442.31 00:22:20.358 00:22:20.358 Total operations: 141180, translate 0 pull_push 564720 memzero 0 00:22:20.358 11:45:23 nvmf_rdma.nvmf_host.dma -- host/dma.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 
-o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:22:20.358 11:45:23 nvmf_rdma.nvmf_host.dma -- host/dma.sh@106 -- # gen_lvol_nvme_json 0 00:22:20.358 11:45:23 nvmf_rdma.nvmf_host.dma -- host/dma.sh@44 -- # local subsystem=0 00:22:20.358 11:45:23 nvmf_rdma.nvmf_host.dma -- host/dma.sh@46 -- # jq . 00:22:20.617 Ignoring -M option 00:22:20.617 [2024-11-20 11:45:23.850983] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:22:20.617 [2024-11-20 11:45:23.851067] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1724064 ] 00:22:20.617 [2024-11-20 11:45:23.923443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:20.617 [2024-11-20 11:45:23.967832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:20.617 [2024-11-20 11:45:23.967834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.187 bdev a4939065-f8e5-44d5-a8b3-f9d5016c6f34 reports 1 memory domains 00:22:27.187 bdev a4939065-f8e5-44d5-a8b3-f9d5016c6f34 supports RDMA memory domain 00:22:27.187 Initialization complete, running randread IO for 5 sec on 2 cores 00:22:27.187 ========================================================================== 00:22:27.187 Latency [us] 00:22:27.187 IOPS MiB/s Average min max 00:22:27.187 Core 2: 73814.40 288.34 215.95 85.48 3349.74 00:22:27.187 Core 3: 74870.88 292.46 212.88 76.41 3284.28 00:22:27.187 ========================================================================== 00:22:27.187 Total : 148685.28 580.80 214.40 76.41 3349.74 00:22:27.187 00:22:27.187 Total operations: 743507, translate 0 pull_push 0 memzero 743507 00:22:27.187 11:45:29 nvmf_rdma.nvmf_host.dma -- host/dma.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 
-o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' 00:22:27.187 [2024-11-20 11:45:29.526239] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:28.565 Initializing NVMe Controllers 00:22:28.565 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:22:28.565 Associating RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:22:28.565 Initialization complete. Launching workers. 00:22:28.565 ======================================================== 00:22:28.565 Latency(us) 00:22:28.565 Device Information : IOPS MiB/s Average min max 00:22:28.565 RDMA (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2008.73 7.85 8027.63 4987.74 10975.97 00:22:28.565 ======================================================== 00:22:28.565 Total : 2008.73 7.85 8027.63 4987.74 10975.97 00:22:28.565 00:22:28.565 11:45:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@112 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:22:28.565 11:45:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@112 -- # gen_lvol_nvme_json 0 00:22:28.565 11:45:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@44 -- # local subsystem=0 00:22:28.565 11:45:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@46 -- # jq . 00:22:28.565 [2024-11-20 11:45:31.872497] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:22:28.565 [2024-11-20 11:45:31.872558] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1725146 ] 00:22:28.565 [2024-11-20 11:45:31.947510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:28.565 [2024-11-20 11:45:31.994811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.565 [2024-11-20 11:45:31.994813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.136 bdev abbac95e-14e8-4843-a0ca-89c9acf0c304 reports 1 memory domains 00:22:35.136 bdev abbac95e-14e8-4843-a0ca-89c9acf0c304 supports RDMA memory domain 00:22:35.136 Initialization complete, running randrw IO for 5 sec on 2 cores 00:22:35.136 ========================================================================== 00:22:35.136 Latency [us] 00:22:35.136 IOPS MiB/s Average min max 00:22:35.136 Core 2: 18959.21 74.06 843.04 18.78 12304.67 00:22:35.136 Core 3: 18801.46 73.44 850.20 18.27 12560.15 00:22:35.136 ========================================================================== 00:22:35.136 Total : 37760.68 147.50 846.60 18.27 12560.15 00:22:35.136 00:22:35.136 Total operations: 188863, translate 188763 pull_push 0 memzero 100 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- host/dma.sh@114 -- # trap - SIGINT SIGTERM EXIT 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # nvmftestfini 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@335 -- # nvmfcleanup 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@99 -- # sync 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # set +e 00:22:35.136 11:45:37 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:22:35.136 rmmod nvme_rdma 00:22:35.136 rmmod nvme_fabrics 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # set -e 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # return 0 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # '[' -n 1722472 ']' 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@337 -- # killprocess 1722472 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # '[' -z 1722472 ']' 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # kill -0 1722472 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # uname 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1722472 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1722472' 00:22:35.136 killing process with pid 1722472 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@973 -- # kill 1722472 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@978 -- # wait 1722472 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@342 -- # nvmf_fini 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- 
nvmf/setup.sh@264 -- # local dev 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@267 -- # remove_target_ns 00:22:35.136 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@268 -- # delete_main_bridge 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@130 -- # return 0 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 
00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@41 -- # _dev=0 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@41 -- # dev_map=() 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/setup.sh@284 -- # iptr 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@542 -- # iptables-restore 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@542 -- # iptables-save 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:22:35.137 00:22:35.137 real 0m32.500s 00:22:35.137 user 1m36.581s 00:22:35.137 sys 0m5.853s 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:35.137 ************************************ 00:22:35.137 END TEST dma 00:22:35.137 ************************************ 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # [[ 0 -eq 1 ]] 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:22:35.137 00:22:35.137 real 5m21.070s 00:22:35.137 user 12m37.595s 00:22:35.137 sys 1m34.910s 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.137 11:45:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.137 ************************************ 00:22:35.137 END TEST nvmf_host 00:22:35.137 ************************************ 00:22:35.137 11:45:37 
nvmf_rdma -- nvmf/nvmf.sh@15 -- # [[ rdma = \t\c\p ]] 00:22:35.137 00:22:35.137 real 16m48.156s 00:22:35.137 user 40m52.701s 00:22:35.137 sys 5m8.929s 00:22:35.137 11:45:37 nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.137 11:45:37 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:35.137 ************************************ 00:22:35.137 END TEST nvmf_rdma 00:22:35.137 ************************************ 00:22:35.137 11:45:37 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:22:35.137 11:45:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:35.137 11:45:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.137 11:45:37 -- common/autotest_common.sh@10 -- # set +x 00:22:35.137 ************************************ 00:22:35.137 START TEST spdkcli_nvmf_rdma 00:22:35.137 ************************************ 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:22:35.137 * Looking for test storage... 
00:22:35.137 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # lcov --version 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:35.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.137 --rc genhtml_branch_coverage=1 00:22:35.137 --rc genhtml_function_coverage=1 00:22:35.137 --rc genhtml_legend=1 00:22:35.137 --rc geninfo_all_blocks=1 00:22:35.137 --rc geninfo_unexecuted_blocks=1 00:22:35.137 00:22:35.137 ' 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:35.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.137 --rc genhtml_branch_coverage=1 00:22:35.137 --rc genhtml_function_coverage=1 00:22:35.137 --rc genhtml_legend=1 00:22:35.137 --rc geninfo_all_blocks=1 00:22:35.137 
--rc geninfo_unexecuted_blocks=1 00:22:35.137 00:22:35.137 ' 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:35.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.137 --rc genhtml_branch_coverage=1 00:22:35.137 --rc genhtml_function_coverage=1 00:22:35.137 --rc genhtml_legend=1 00:22:35.137 --rc geninfo_all_blocks=1 00:22:35.137 --rc geninfo_unexecuted_blocks=1 00:22:35.137 00:22:35.137 ' 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:35.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.137 --rc genhtml_branch_coverage=1 00:22:35.137 --rc genhtml_function_coverage=1 00:22:35.137 --rc genhtml_legend=1 00:22:35.137 --rc geninfo_all_blocks=1 00:22:35.137 --rc geninfo_unexecuted_blocks=1 00:22:35.137 00:22:35.137 ' 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.137 
11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0049fda6-1adc-e711-906e-0017a4403562 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVME_HOSTID=0049fda6-1adc-e711-906e-0017a4403562 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:22:35.137 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/setup.sh 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@50 -- # : 0 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 
00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:35.138 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1725982 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 1725982 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # '[' -z 1725982 ']' 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:22:35.138 11:45:38 
spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.138 11:45:38 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:35.138 [2024-11-20 11:45:38.303899] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:22:35.138 [2024-11-20 11:45:38.303966] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1725982 ] 00:22:35.138 [2024-11-20 11:45:38.382992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:35.138 [2024-11-20 11:45:38.430397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.138 [2024-11-20 11:45:38.430398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.705 11:45:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.705 11:45:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@868 -- # return 0 00:22:35.705 11:45:39 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:22:35.705 11:45:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:35.705 11:45:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:35.705 11:45:39 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:22:35.705 11:45:39 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:22:35.705 11:45:39 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:22:35.705 11:45:39 
spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # '[' -z rdma ']' 00:22:35.705 11:45:39 spdkcli_nvmf_rdma -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.705 11:45:39 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:35.706 11:45:39 spdkcli_nvmf_rdma -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:35.706 11:45:39 spdkcli_nvmf_rdma -- nvmf/common.sh@260 -- # remove_target_ns 00:22:35.706 11:45:39 spdkcli_nvmf_rdma -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:35.706 11:45:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:22:35.706 11:45:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:35.964 11:45:39 spdkcli_nvmf_rdma -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:22:35.964 11:45:39 spdkcli_nvmf_rdma -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:22:35.964 11:45:39 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # xtrace_disable 00:22:35.964 11:45:39 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@131 -- # pci_devs=() 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@135 -- # net_devs=() 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@136 -- # e810=() 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- 
nvmf/common.sh@136 -- # local -ga e810 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@137 -- # x722=() 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@137 -- # local -ga x722 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@138 -- # mlx=() 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@138 -- # local -ga mlx 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.525 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@163 -- # [[ rdma == rdma ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@164 -- # pci_devs+=("${x722[@]}") 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- 
nvmf/common.sh@165 -- # pci_devs+=("${mlx[@]}") 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@169 -- # [[ mlx5 == mlx5 ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@170 -- # pci_devs=("${mlx[@]}") 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:22:42.526 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@183 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:22:42.526 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@184 -- # [[ mlx5_core == unknown ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@188 -- # [[ mlx5_core == unbound ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@192 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@193 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@194 -- # [[ rdma == rdma ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@204 -- # NVME_CONNECT='nvme connect -i 15' 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- 
nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@214 -- # [[ mlx5 == e810 ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:42.526 Found net devices under 0000:18:00.0: mlx_0_0 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@232 -- # [[ rdma == tcp ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:42.526 Found net devices under 0000:18:00.1: mlx_0_1 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@252 -- # [[ rdma == rdma ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@253 -- # get_rdma_if_list 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # rdma_devs=() 00:22:42.526 11:45:45 
spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # local net_dev rxe_net_dev rxe_net_devs rdma_devs 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # mapfile -t rxe_net_devs 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # rxe_cfg rxe-net 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # (( 2 == 0 )) 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@89 -- # continue 2 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # for net_dev in "${net_devs[@]}" 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@88 -- # rdma_devs+=("$net_dev") 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@89 -- # continue 2 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # (( 2 > 0 )) 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@95 -- # net_devs=("${rdma_devs[@]}") 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@262 -- # is_hw=yes 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:22:42.526 11:45:45 
spdkcli_nvmf_rdma -- nvmf/common.sh@265 -- # [[ rdma == tcp ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@267 -- # [[ rdma == rdma ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@268 -- # nvmf_rdma_init 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@244 -- # local total_initiator_target_pairs=1 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@246 -- # load_ib_rdma_modules 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@61 -- # uname 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@61 -- # '[' Linux '!=' Linux ']' 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_cm 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_core 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_umad 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_uverbs 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe iw_cm 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe rdma_cm 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_ucm 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@247 -- # setup_interfaces 1 phy rdma 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=rdma ip_pool=0x0a000001 max 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@28 -- # local -g _dev 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 rdma 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- 
nvmf/setup.sh@44 -- # ips=() 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=rdma ips 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@51 -- # [[ rdma == tcp ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@53 -- # [[ rdma == rdma ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@58 -- # key_initiator=target1 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@64 -- # initiator=mlx_0_0 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@64 -- # target=mlx_0_1 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@70 -- # [[ rdma == tcp ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@72 -- # set_ip mlx_0_0 167772161 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@204 -- # local dev=mlx_0_0 ip=167772161 in_ns= 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@11 -- # local val=167772161 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev mlx_0_0' 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@208 -- 
# ip addr add 10.0.0.1/24 dev mlx_0_0 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/mlx_0_0/ifalias' 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_0/ifalias 00:22:42.526 10.0.0.1 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@73 -- # set_ip mlx_0_1 167772162 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@204 -- # local dev=mlx_0_1 ip=167772162 in_ns= 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@11 -- # local val=167772162 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.2/24 dev mlx_0_1' 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.2/24 dev mlx_0_1 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | tee /sys/class/net/mlx_0_1/ifalias' 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@210 -- # tee /sys/class/net/mlx_0_1/ifalias 00:22:42.526 10.0.0.2 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@75 -- # set_up mlx_0_0 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@214 -- # local dev=mlx_0_0 in_ns= 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_0 up' 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@217 -- # ip link set mlx_0_0 up 00:22:42.526 11:45:45 spdkcli_nvmf_rdma -- 
nvmf/setup.sh@76 -- # set_up mlx_0_1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@214 -- # local dev=mlx_0_1 in_ns= 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@217 -- # eval ' ip link set mlx_0_1 up' 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@217 -- # ip link set mlx_0_1 up 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@81 -- # [[ rdma == tcp ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=mlx_0_0 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=mlx_0_1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@38 -- # ping_ips 1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@99 -- # get_rdma_initiator_ip_address 0 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 0 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:42.527 11:45:45 
spdkcli_nvmf_rdma -- nvmf/setup.sh@107 -- # local dev=target0 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.2 NVMF_TARGET_NS_CMD 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:22:42.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:42.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.031 ms 00:22:42.527 00:22:42.527 --- 10.0.0.2 ping statistics --- 00:22:42.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.527 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@100 -- # get_rdma_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@200 -- # get_target_ip_address 0 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@107 -- # local dev=target0 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@92 -- # 
eval ' ping -c 1 10.0.0.2' 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:22:42.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.023 ms 00:22:42.527 00:22:42.527 --- 10.0.0.2 ping statistics --- 00:22:42.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.527 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@98 -- # (( pair++ )) 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@270 -- # return 0 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=mlx_0_1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=mlx_0_0 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@334 -- # get_rdma_initiator_ip_address 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address '' 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@107 -- # local dev=target0 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@110 -- # echo mlx_0_1 
00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.2 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@335 -- # get_rdma_initiator_ip_address 1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@192 -- # get_rdma_target_ip_address 1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@168 -- # get_net_dev target1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@107 -- # local dev=target1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- 
nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@337 -- # get_rdma_target_ip_address 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@200 -- # get_target_ip_address '' 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@179 -- # get_ip_address target0 '' 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@165 -- # local dev=target0 in_ns= ip 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@107 -- # local dev=target0 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@109 -- # [[ -n mlx_0_1 ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@110 -- # echo mlx_0_1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@168 -- # dev=mlx_0_1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_1/ifalias' 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_1/ifalias 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@338 -- # get_rdma_target_ip_address 1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@200 -- # get_target_ip_address 1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@179 -- # get_ip_address target1 '' 00:22:42.527 
11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@165 -- # local dev=target1 in_ns= ip 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@168 -- # get_net_dev target1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@107 -- # local dev=target1 00:22:42.527 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@109 -- # [[ -n mlx_0_0 ]] 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@110 -- # echo mlx_0_0 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@168 -- # dev=mlx_0_0 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/mlx_0_0/ifalias' 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@172 -- # cat /sys/class/net/mlx_0_0/ifalias 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=10.0.0.1 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # [[ rdma == \r\d\m\a ]] 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # '[' rdma == tcp ']' 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # '[' rdma == rdma ']' 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # modprobe nvme-rdma 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=10.0.0.2 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:22:42.528 11:45:45 
spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:42.528 11:45:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:42.528 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:42.528 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:22:42.528 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:22:42.528 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:22:42.528 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:22:42.528 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:22:42.528 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:42.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:22:42.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:22:42.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 10.0.0.2 4260 IPv4'\'' '\''10.0.0.2:4260'\'' True 00:22:42.528 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:42.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:22:42.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 10.0.0.2 4260 IPv4'\'' '\''10.0.0.2:4260'\'' True 00:22:42.528 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:42.528 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:22:42.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 10.0.0.2 4260 IPv4'\'' '\''10.0.0.2:4260'\'' True 00:22:42.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 10.0.0.2 4261 IPv4'\'' '\''10.0.0.2:4261'\'' True 00:22:42.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:42.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:42.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:22:42.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:22:42.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 10.0.0.2 4261 IPv4'\'' '\''10.0.0.2:4261'\'' True 00:22:42.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 10.0.0.2 4262 IPv4'\'' '\''10.0.0.2:4262'\'' True 00:22:42.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:42.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:22:42.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:22:42.528 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:22:42.528 ' 00:22:45.057 [2024-11-20 11:45:48.452940] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x79cb20/0x7aaab0) succeed. 00:22:45.057 [2024-11-20 11:45:48.462907] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x79e200/0x82ab40) succeed. 
00:22:46.430 [2024-11-20 11:45:49.738624] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4260 *** 00:22:49.057 [2024-11-20 11:45:51.985784] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4261 *** 00:22:50.961 [2024-11-20 11:45:53.920107] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 10.0.0.2 port 4262 *** 00:22:52.336 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:22:52.336 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:22:52.336 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:22:52.336 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:22:52.336 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:22:52.336 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:22:52.336 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:22:52.336 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:52.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:22:52.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:22:52.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 10.0.0.2 4260 IPv4', '10.0.0.2:4260', True] 00:22:52.336 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:52.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:22:52.336 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 10.0.0.2 4260 IPv4', '10.0.0.2:4260', True] 00:22:52.336 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:52.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:22:52.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 10.0.0.2 4260 IPv4', '10.0.0.2:4260', True] 00:22:52.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 10.0.0.2 4261 IPv4', '10.0.0.2:4261', True] 00:22:52.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:52.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:52.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:22:52.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:22:52.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 10.0.0.2 4261 IPv4', '10.0.0.2:4261', True] 00:22:52.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 10.0.0.2 4262 IPv4', '10.0.0.2:4262', True] 00:22:52.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:52.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:22:52.336 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 
00:22:52.336 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:22:52.336 11:45:55 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:22:52.336 11:45:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:52.336 11:45:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:52.336 11:45:55 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:22:52.336 11:45:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:52.336 11:45:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:52.336 11:45:55 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:22:52.336 11:45:55 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:22:52.595 11:45:55 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:22:52.595 11:45:56 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:22:52.595 11:45:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:22:52.595 11:45:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:52.595 11:45:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:52.595 11:45:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:22:52.595 11:45:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:52.595 11:45:56 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:52.595 11:45:56 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 
''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:22:52.595 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:22:52.595 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:52.595 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:22:52.595 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 10.0.0.2 4262'\'' '\''10.0.0.2:4262'\'' 00:22:52.595 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''10.0.0.2:4261'\'' 00:22:52.595 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:22:52.595 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:52.595 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:22:52.595 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:22:52.595 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:22:52.595 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:22:52.595 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:22:52.595 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:22:52.595 ' 00:22:57.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:22:57.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:22:57.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:57.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:22:57.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 10.0.0.2 4262', '10.0.0.2:4262', False] 
00:22:57.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '10.0.0.2:4261', False] 00:22:57.882 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:22:57.882 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:57.882 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:22:57.882 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:22:57.882 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:22:57.882 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:22:57.882 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:22:57.882 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:22:57.882 11:46:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:22:57.882 11:46:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:57.882 11:46:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:57.882 11:46:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 1725982 00:22:57.882 11:46:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' -z 1725982 ']' 00:22:57.882 11:46:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # kill -0 1725982 00:22:57.882 11:46:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # uname 00:22:57.882 11:46:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.882 11:46:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1725982 00:22:57.882 11:46:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:57.882 11:46:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:57.882 11:46:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1725982' 00:22:57.882 killing process with pid 1725982 00:22:57.882 11:46:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # kill 1725982 00:22:57.882 11:46:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@978 -- # wait 1725982 00:22:58.141 11:46:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:22:58.141 11:46:01 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # nvmfcleanup 00:22:58.141 11:46:01 spdkcli_nvmf_rdma -- nvmf/common.sh@99 -- # sync 00:22:58.141 11:46:01 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # '[' rdma == tcp ']' 00:22:58.141 11:46:01 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # '[' rdma == rdma ']' 00:22:58.141 11:46:01 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # set +e 00:22:58.141 11:46:01 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:58.141 11:46:01 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # modprobe -v -r nvme-rdma 00:22:58.141 rmmod nvme_rdma 00:22:58.141 rmmod nvme_fabrics 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # set -e 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # return 0 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # nvmf_fini 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@264 -- # local dev 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@267 -- # remove_target_ns 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- 
nvmf/setup.sh@268 -- # delete_main_bridge 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@130 -- # return 0 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_1/address ]] 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@279 -- # flush_ip mlx_0_1 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@221 -- # local dev=mlx_0_1 in_ns= 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_1' 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_1 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/mlx_0_0/address ]] 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@279 -- # flush_ip mlx_0_0 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@221 -- # local dev=mlx_0_0 in_ns= 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev mlx_0_0' 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@224 -- # ip addr flush dev mlx_0_0 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@41 -- # _dev=0 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@41 -- # dev_map=() 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/setup.sh@284 -- # iptr 00:22:58.400 11:46:01 
spdkcli_nvmf_rdma -- nvmf/common.sh@542 -- # iptables-save 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/common.sh@542 -- # iptables-restore 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:22:58.400 00:22:58.400 real 0m23.646s 00:22:58.400 user 0m50.341s 00:22:58.400 sys 0m5.958s 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:58.400 11:46:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:58.400 ************************************ 00:22:58.400 END TEST spdkcli_nvmf_rdma 00:22:58.400 ************************************ 00:22:58.400 11:46:01 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:58.400 11:46:01 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:58.400 11:46:01 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:22:58.400 11:46:01 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:22:58.400 11:46:01 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:22:58.400 11:46:01 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:58.400 11:46:01 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:58.400 11:46:01 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:58.400 11:46:01 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:22:58.400 11:46:01 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:58.400 11:46:01 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:22:58.400 11:46:01 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:58.400 11:46:01 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:58.400 11:46:01 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:22:58.400 11:46:01 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:22:58.400 11:46:01 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:22:58.400 11:46:01 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:22:58.400 11:46:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.400 11:46:01 -- common/autotest_common.sh@10 -- # set +x 00:22:58.400 11:46:01 -- spdk/autotest.sh@388 -- # 
autotest_cleanup 00:22:58.400 11:46:01 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:22:58.400 11:46:01 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:22:58.400 11:46:01 -- common/autotest_common.sh@10 -- # set +x 00:23:03.669 INFO: APP EXITING 00:23:03.669 INFO: killing all VMs 00:23:03.669 INFO: killing vhost app 00:23:03.669 INFO: EXIT DONE 00:23:06.200 Waiting for block devices as requested 00:23:06.200 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:23:06.200 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:06.200 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:06.459 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:06.459 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:06.459 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:06.717 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:06.717 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:06.717 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:06.976 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:06.976 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:06.976 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:07.235 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:07.235 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:07.235 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:07.494 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:07.494 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:10.778 Cleaning 00:23:10.779 Removing: /var/run/dpdk/spdk0/config 00:23:10.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:10.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:10.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:10.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:10.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:23:10.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:23:10.779 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:23:10.779 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:23:10.779 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:10.779 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:10.779 Removing: /var/run/dpdk/spdk1/config 00:23:10.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:10.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:10.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:10.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:10.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:23:10.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:23:10.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:23:10.779 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:23:10.779 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:10.779 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:10.779 Removing: /var/run/dpdk/spdk1/mp_socket 00:23:10.779 Removing: /var/run/dpdk/spdk2/config 00:23:10.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:10.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:10.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:10.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:10.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:23:10.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:23:10.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:23:10.779 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:23:10.779 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:10.779 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:10.779 Removing: /var/run/dpdk/spdk3/config 00:23:10.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:10.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:10.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:10.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:10.779 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:23:10.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:23:10.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:23:10.779 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:23:10.779 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:10.779 Removing: /var/run/dpdk/spdk3/hugepage_info 00:23:10.779 Removing: /var/run/dpdk/spdk4/config 00:23:10.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:10.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:10.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:10.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:10.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:23:10.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:23:10.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:23:10.779 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:23:10.779 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:10.779 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:10.779 Removing: /dev/shm/bdevperf_trace.pid1522795 00:23:10.779 Removing: /dev/shm/bdev_svc_trace.1 00:23:10.779 Removing: /dev/shm/nvmf_trace.0 00:23:10.779 Removing: /dev/shm/spdk_tgt_trace.pid1485274 00:23:10.779 Removing: /var/run/dpdk/spdk0 00:23:10.779 Removing: /var/run/dpdk/spdk1 00:23:10.779 Removing: /var/run/dpdk/spdk2 00:23:10.779 Removing: /var/run/dpdk/spdk3 00:23:10.779 Removing: /var/run/dpdk/spdk4 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1482348 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1483632 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1485274 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1485814 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1486574 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1486756 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1487540 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1487656 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1487885 00:23:10.779 Removing: 
/var/run/dpdk/spdk_pid1492402 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1493877 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1494120 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1494369 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1494630 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1494887 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1495091 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1495294 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1495527 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1496126 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1498547 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1498936 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1499160 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1499337 00:23:10.779 Removing: /var/run/dpdk/spdk_pid1499743 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1499866 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1500319 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1500329 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1500662 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1500726 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1500942 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1501120 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1501557 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1501691 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1502002 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1505510 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1509009 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1517497 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1518223 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1522795 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1523072 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1526678 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1531649 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1533739 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1542270 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1557158 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1560502 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1596772 
00:23:11.037 Removing: /var/run/dpdk/spdk_pid1601029 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1605852 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1613817 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1646587 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1647466 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1648276 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1649240 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1653128 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1657646 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1663696 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1663698 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1667569 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1668012 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1668473 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1669023 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1669045 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1673435 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1673885 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1677455 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1679641 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1684771 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1693311 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1693359 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1709755 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1710000 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1714878 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1715283 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1716790 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1722609 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1723343 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1724064 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1724785 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1725146 00:23:11.037 Removing: /var/run/dpdk/spdk_pid1725982 00:23:11.294 Clean 00:23:11.294 11:46:14 -- common/autotest_common.sh@1453 -- # return 0 00:23:11.295 11:46:14 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:23:11.295 11:46:14 -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.295 11:46:14 -- common/autotest_common.sh@10 -- # set +x 00:23:11.295 11:46:14 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:23:11.295 11:46:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.295 11:46:14 -- common/autotest_common.sh@10 -- # set +x 00:23:11.295 11:46:14 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:23:11.295 11:46:14 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:23:11.295 11:46:14 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:23:11.295 11:46:14 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:23:11.295 11:46:14 -- spdk/autotest.sh@398 -- # hostname 00:23:11.295 11:46:14 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-34 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:23:11.551 geninfo: WARNING: invalid characters removed from testname! 
00:23:33.475 11:46:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:34.413 11:46:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:36.317 11:46:39 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:37.695 11:46:41 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:39.601 11:46:42 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:41.504 11:46:44 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:23:43.409 11:46:46 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:43.409 11:46:46 -- spdk/autorun.sh@1 -- $ timing_finish 00:23:43.409 11:46:46 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]] 00:23:43.409 11:46:46 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:43.409 11:46:46 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:23:43.409 11:46:46 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:23:43.409 + [[ -n 1410264 ]] 00:23:43.409 + sudo kill 1410264 00:23:43.420 [Pipeline] } 00:23:43.437 [Pipeline] // stage 00:23:43.444 [Pipeline] } 00:23:43.460 [Pipeline] // timeout 00:23:43.465 [Pipeline] } 00:23:43.483 [Pipeline] // catchError 00:23:43.490 [Pipeline] } 00:23:43.507 [Pipeline] // wrap 00:23:43.513 [Pipeline] } 00:23:43.529 [Pipeline] // catchError 00:23:43.540 [Pipeline] stage 00:23:43.543 [Pipeline] { (Epilogue) 00:23:43.558 [Pipeline] catchError 00:23:43.561 [Pipeline] { 00:23:43.576 [Pipeline] echo 00:23:43.578 Cleanup processes 00:23:43.584 [Pipeline] sh 00:23:43.868 + 
sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:23:43.868 1738721 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:23:43.883 [Pipeline] sh 00:23:44.170 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:23:44.170 ++ grep -v 'sudo pgrep' 00:23:44.170 ++ awk '{print $1}' 00:23:44.170 + sudo kill -9 00:23:44.170 + true 00:23:44.237 [Pipeline] sh 00:23:44.570 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:52.707 [Pipeline] sh 00:23:52.991 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:52.991 Artifacts sizes are good 00:23:53.008 [Pipeline] archiveArtifacts 00:23:53.017 Archiving artifacts 00:23:53.135 [Pipeline] sh 00:23:53.424 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:23:53.439 [Pipeline] cleanWs 00:23:53.449 [WS-CLEANUP] Deleting project workspace... 00:23:53.449 [WS-CLEANUP] Deferred wipeout is used... 00:23:53.456 [WS-CLEANUP] done 00:23:53.457 [Pipeline] } 00:23:53.475 [Pipeline] // catchError 00:23:53.487 [Pipeline] sh 00:23:53.773 + logger -p user.info -t JENKINS-CI 00:23:53.783 [Pipeline] } 00:23:53.797 [Pipeline] // stage 00:23:53.803 [Pipeline] } 00:23:53.819 [Pipeline] // node 00:23:53.825 [Pipeline] End of Pipeline 00:23:53.863 Finished: SUCCESS